Tuesday, 29 July 2014

Mobile Testing Job Opportunities

Mobile testing is booming, and that's a good thing! But with multiple operating systems and tens of thousands of devices on the market, how can you ensure your app delights your users everywhere, every time, at every turn?

Applause is a platform that arms you with a 360° approach to mobile app quality to win in today's ultra-competitive apps economy. From in-the-wild testing services to mobile app quality tools, Applause makes it easier than ever to create winning mobile apps every time.

In-the-Wild Testing
Lab environments are places of perfect connectivity, limited devices and up-to-date operating systems. Sadly, that's not the world your app is going to encounter outside the walls of your test lab.
In-the-wild testing ensures your app works well in real world situations, such as:
On a Range of Devices: The number of mobile devices grows every day, bringing with it a range of screen sizes, resolutions, processing power, etc. You'd need a warehouse to store all these devices.
On Real Devices: Emulators and simulators simply don't cut it. Fingers on touchscreens behave very differently from mouse clicks, and real devices aren't clean, well-maintained systems. But to house all the devices you would need for testing, you'd need unlimited space and an astronomical budget.
On-Location: Connectivity changes with network and location, but you can't mimic those fluctuating conditions in a lab.
Outdated Software: Not everyone regularly updates their operating system. Some Android users might not even have access to the newest version. How does your app behave on older OS versions?


Further Readings:
Infibeam for 23% discount:
http://www.infibeam.com/Books/mobile-software-testing-narayanan-palani/9789383952144.html

Kindle Edition:
http://www.amazon.com/Mobile-Software-Testing-Narayanan-Palani-ebook/dp/B00M7G7ZVW/

Experitest

Experitest is a well-known tool for mobile test automation, manual testing and monitoring.
It supports all the major mobile operating systems, including iOS, Android, Windows Phone 8 and BlackBerry.
How to set up a project in SeeTest?
Step 1: In SeeTest, select New Project from the File menu.
Step 2: A pop-up will appear. Type in the project name and then click OK.
How to Record?
Step 1: Open the tested application on your desktop (in our example – the tested application is Skype).
Step 2: In SeeTest, go to the Script tab and click on the Record button.
Step 3: Go to the tested application and perform the exact sequence of actions you want in the test. For example: click on the application's minimized window to launch Skype, click on the Contacts tab, click on the contact “Simon”, click the “Call” button (green phone), click the “Hang up” button (red phone), then click the minimize icon on the Skype window.

A few best practices when recording:
(1) Click at the center of the image/icon/link
(2) Record slowly

Step 4: Return to SeeTest and click on the Stop Record button.
Step 5: You will see a progress bar indicating the record data is being analyzed, and the test script will then appear in the Script area.
Step 6: Before clicking the Play button to run the script:

(1) Edit the "Set Application Title" so that it includes only the main application name (e.g. delete things that may change, for example if the application name   is "Skype – xxx)", just leave "Skype" as the application name)
 
(2) Set the tested application in the same starting mode as the one you recorded the test on:
a) Same start screen.
b) The same items in the tested application are highlighted or not highlighted, as they were when you recorded the test.
c) Text boxes are empty or filled with the same text as when you recorded.
d) Same view settings (e.g. in a browser, make sure the same toolbars are visible – both the side toolbars and the top/bottom toolbars).
e) Verify that the window name of the tested application is the same as when you recorded the test (it appears in the script at the “Set Application Title” command).
(3) Check that the elements have been extracted accurately and, if not, Edit or ReLearn the element:
a) Edit an element that was extracted too widely (e.g. several buttons instead of one) or too narrowly (e.g. only part of the image, or an image cut in the middle).
b) ReLearn an element that was not identified during runtime because it has several appearances (such as with or without mouse-over) and was extracted in the wrong appearance. For example, an element is sometimes captured in mouse-over mode (highlighted) and you need to ReLearn it in the non-mouse-over mode (not highlighted). To do so, set the tested application screen to the right mode and copy it to the clipboard using PrintScreen. Then go to SeeTest, right-click on the element, select ReLearn and then select Clipboard from the dropdown list. The corrected element will appear; click OK to finish.
c) If you have ReLearnt an element as described in (b) above but it is still not identified during runtime, reduce the sensitivity of that element (reduce it by 5% each time, re-run the test and check whether the element is now identified successfully).
(4) If needed, add synchronization commands (such as Sleep or WaitForElement) to ensure that each test step is executed on the correct application window (see the Edit > Add Element & Command section for how to add a command).

Step 7: Once the test has been executed, you will receive a report indicating, for each test step, whether it succeeded or failed, including a screenshot of the tested application at run time.
Step 8: You can run the test script from QTP, TestComplete, RFT, JUnit, Python, Perl, C# and other frameworks. To do so, click the Export code button, copy the code into your framework and run it from there. (For a detailed explanation, please refer to the Export Code section.)
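As a rough illustration, the skeleton below (Python unittest, purely a sketch) shows one way to wire an exported SeeTest script into a test framework. The placeholder function run_exported_seetest_script and the test class name are assumptions for illustration only; the actual replay code and client API come from the Export code button.

import unittest


def run_exported_seetest_script():
    """Placeholder: paste the code generated by SeeTest's Export code button here.

    The exported script typically creates its own client object, connects to the
    SeeTest agent, replays the recorded steps (launch Skype, open Contacts, call
    "Simon", hang up, minimize) and raises or reports on failure. The exact API
    depends on the language and client version that SeeTest generates.
    """
    raise NotImplementedError("paste the exported SeeTest code here")


class SkypeCallTest(unittest.TestCase):
    # One framework-level test case wrapping the whole recorded flow.
    def test_call_and_hang_up(self):
        run_exported_seetest_script()


if __name__ == "__main__":
    unittest.main()

The same pattern applies to JUnit, NUnit or any other framework listed above: the framework owns scheduling and reporting, while the exported script owns the device interaction.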

Further Readings:



 Kindle Edition:
http://www.amazon.com/Mobile-Software-Testing-Narayanan-Palani-ebook/dp/B00M7G7ZVW/

Infibeam for 23% discount:
http://www.infibeam.com/Books/mobile-software-testing-narayanan-palani/9789383952144.html

QTP Automation Testing



HP UFT Mobile extends the QTP tool to automate tests of cloud-based mobile applications. It reuses VBScript code snippets as callable components, and testers can use any automation framework to test mobile applications across platforms.

Test cases can be written in the Test Plan tab of HP Application Lifecycle Management, with the corresponding test scripts written in UFT, so that each test case can call its script during test execution.
The test script uses HP UFT Mobile to connect to mobile devices and report the test results.
It is also possible to connect HP LoadRunner and HP Performance Center for non-functional tests.

Testers can refer to the following book for QTP scripting:

http://knowledgeinbox.com/demos/QuickTestProfessional_Book_preview.pdf

The following book best describes HP UFT Mobile scripting and the automation of mobile apps:

Infibeam for 23% discount:
http://www.infibeam.com/Books/mobile-software-testing-narayanan-palani/9789383952144.html

Kindle Edition:
http://www.amazon.com/Mobile-Software-Testing-Narayanan-Palani-ebook/dp/B00M7G7ZVW/

Tuesday, 14 January 2014

ISTQB - LATEST PRACTICE QUESTION PAPER 10

Questions
1    We split testing into distinct stages primarily because:
a)    Each test stage has a different purpose.
b)    It is easier to manage testing in stages.
c)    We can run different tests in different environments.
d)    The more stages we have, the better the testing.
2    Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
a)    Regression testing
b)    Integration testing
c)    System testing
d)    User acceptance testing
3    Which of the following statements is NOT correct?
a)    A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
b)    A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
c)    A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
d)    A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.
4    Which of the following requirements is testable?
a)    The system shall be user friendly.
b)    The safety-critical parts of the system shall contain 0 faults.
c)    The response time shall be less than one second for the specified design load.
d)    The system shall be built to be portable.
5    Analyse the following highly simplified procedure:
Ask: “What type of ticket do you require, single or return?”
IF the customer wants ‘return’
Ask: “What rate, Standard or Cheap-day?”
IF the customer replies ‘Cheap-day’
Say: “That will be £11.20”
ELSE
Say: “That will be £19.50”
ENDIF
ELSE
Say: “That will be £9.75”
ENDIF
Now decide the minimum number of tests that are needed to ensure that all
the questions have been asked, all combinations have occurred and all
replies given.
a)    3
b)    4
c)    5
d)    6
6    Error guessing:
a)    supplements formal test design techniques.
b)    can only be used in component, integration and system testing.
c)    is only performed in user acceptance testing.
d)    is not repeatable and should not be used.
7    Which of the following is NOT true of test coverage criteria?
a)    Test coverage criteria can be measured in terms of items exercised by a test suite.
b)    A measure of test coverage criteria is the percentage of user requirements covered.
c)    A measure of test coverage criteria is the percentage of faults found.
d)    Test coverage criteria are often used when specifying test completion criteria.
8    In prioritizing what to test, the most important objective is to:
a)    find as many faults as possible.
b)    test high risk areas.
c)    obtain good test coverage.
d)    test whatever is easiest to test.
9    Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?
v – test control
w – test monitoring
x – test estimation
y – incident management
z – configuration control
1 -   calculation of required test resources
2 -   maintenance of record of test results
3 -   re-allocation of resources when tests overrun
4 -   report on deviation from test plan
5 -   tracking of anomalous test results
a)    v-3,w-2,x-1,y-5,z-4
b)    v-2,w-5,x-1,y-4,z-3
c)    v-3,w-4,x-1,y-5,z-2
d)    v-2,w-1,x-4,y-3,z-5
10    Which one of the following statements about system testing is NOT true?
a)    System tests are often performed by independent teams.
b)    Functional testing is used more than structural testing.
c)    Faults found during system tests can be very expensive to fix.
d)    End-users should be involved in system tests.
11    Which of the following is false?
a)    Incidents should always be fixed.
b)    An incident occurs when expected and actual results differ.
c)    Incidents can be analysed to assist in test process improvement.
d)    An incident can be raised against documentation.
12    Enough testing has been performed when:
a)    time runs out.
b)    the required level of confidence has been achieved.
c)    no more faults are found.
d)    the users won’t find any serious faults.
13    Which of the following is NOT true of incidents?
a)    Incident resolution is the responsibility of the author of the software under test.
b)    Incidents may be raised against user requirements.
c)    Incidents require investigation and/or correction.
d)    Incidents are raised when expected and actual results differ.
14    Which of the following is not described in a unit test standard?
a)    syntax testing
b)    equivalence partitioning
c)    stress testing
d)    modified condition/decision coverage
15    Which of the following is false?
a)    In a system two different failures may have different severities.
b)    A system is necessarily more reliable after debugging for the removal of a fault.
c)    A fault need not affect the reliability of a system.
d)    Undetected errors may lead to faults and eventually to incorrect behaviour.
16    Which one of the following statements, about capture-replay tools, is NOT correct?
a)    They are used to support multi-user testing.
b)    They are used to capture and animate user requirements.
c)    They are the most frequently purchased types of CAST tool.
d)    They capture aspects of user behavior.
17    How would you estimate the amount of re-testing likely to be required?
a)    Metrics from previous similar projects
b)    Discussions with the development team
c)    Time allocated for regression testing
d)    a & b
18    Which of the following is true of the V-model?
a)    It states that modules are tested against user requirements.
b)    It only models the testing phase.
c)    It specifies the test techniques to be used.
d)    It includes the verification of designs.
19    The oracle assumption:
a)    is that there is some existing system against which test output may be checked.
b)    is that the tester can routinely identify the correct outcome of a test.
c)    is that the tester knows everything about the software under test.
d)    is that the tests are reviewed by experienced testers.
20    Which of the following characterizes the cost of faults?
a)    They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.
b)    They are easiest to find during system testing but the most expensive to fix then.
c)    Faults are cheapest to find in the early development phases but the most expensive to fix then.
d)    Although faults are most expensive to find during early development phases, they are cheapest to fix then.
21    Which of the following should NOT normally be an objective for a test?
a)    To find faults in the software.
b)    To assess whether the software is ready for release.
c)    To demonstrate that the software doesn’t work.
d)    To prove that the software is correct.
22    Which of the following is a form of functional testing?
a)    Boundary value analysis
b)    Usability testing
c)    Performance testing
d)    Security testing
23    Which of the following would NOT normally form part of a test plan?
a)    Features to be tested
b)    Incident reports
c)    Risks
d)    Schedule
24    Which of these activities provides the biggest potential cost saving from the use of CAST?
a)    Test management
b)    Test design
c)    Test execution
d)    Test planning
25    Which of the following is NOT a white box technique?
a)    Statement testing
b)    Path testing
c)    Data flow testing
d)    State transition testing
26    Data flow analysis studies:
a)    possible communications bottlenecks in a program.
b)    the rate of change of data values as a program executes.
c)    the use of data on paths through the code.
d)    the intrinsic complexity of the code.
27    In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
The next £28000 is taxed at 22%
Any further amount is taxed at 40%
To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a)    £1500
b)    £32001
c)    £33501
d)    £28000
28    An important benefit of code inspections is that they:
a)    enable the code to be tested before the execution environment is ready.
b)    can be performed by the person who wrote the code.
c)    can be performed by inexperienced staff.
d)    are cheap to perform.
29    Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?
a)    Actual results
b)    Program specification
c)    User requirements
d)    System specification
30    What is the main difference between a walkthrough and an inspection?
a)    An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b)    An inspection has a trained leader, whilst a walkthrough has no leader.
c)    Authors are not present during inspections, whilst they are during walkthroughs.
d)    A walkthrough is led by the author, whilst an inspection is led by a trained moderator.
31    Which one of the following describes the major benefit of verification early in the life cycle?
a)    It allows the identification of changes in user requirements.
b)    It facilitates timely set up of the test environment.
c)    It reduces defect multiplication.
d)    It allows testers to become involved early in the project.
32    Integration testing in the small:
a)    tests the individual components that have been developed.
b)    tests interactions between modules or subsystems.
c)    only uses components that form part of the live system.
d)    tests interfaces to other systems.
33    Static analysis is best described as:
a)    the analysis of batch programs.
b)    the reviewing of test plans.
c)    the analysis of program code.
d)    the use of black box testing.
34     Alpha testing is:
a)    post-release testing by end user representatives at the developer’s site.
b)    the first testing that is performed.
c)    pre-release testing by end user representatives at the developer’s site.
d)    pre-release testing by end user representatives at their sites.
35    A failure is:
a)    found in the software; the result of an error.
b)    departure from specified behavior.
c)    an incorrect step, process or data definition in a computer program.
d)    a human action that produces an incorrect result.
36    In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
The next £28000 is taxed at 22%
Any further amount is taxed at 40%
Which of these groups of numbers would fall into the same equivalence class?
a)    £4800; £14000; £28000
b)    £5200; £5500; £28000
c)    £28001; £32000; £35000
d)    £5800; £28000; £32000
37    The most important thing about early test design is that it:
a)    makes test preparation easier.
b)    means inspections are not required.
c)    can prevent fault multiplication.
d)    will find all faults.
38    Which of the following statements about reviews is true?
a)    Reviews cannot be performed on user requirements specifications.
b)    Reviews are the least effective way of testing code.
c)    Reviews are unlikely to find faults in test plans.
d)    Reviews should be performed on specifications, code, and test plans.
39    Test cases are designed during:
a)    test recording.
b)    test planning.
c)    test configuration.
d)    test specification.
40    A configuration management system would NOT normally provide:
a)    linkage of customer requirements to version numbers.
b)    facilities to compare test results with expected results.
c)    the precise differences in versions of software component source code.
d)    restricted access to the source code library.
Answers for above questions:
Question Answer
1     A
2     A
3     D
4     C
5     A
6     A
7     C
8     B
9     C
10   D
11   A
12   B
13   A
14   C
15   B
16   B
17   D
18   D
19   B
20   A
21   D
22   A
23   B
24   C
25   D
26   C
27   C
28   A
29   C
30   D
31   C
32   B
33   C
34   C
35   B
36   D
37   C
38   D
39   D
40   B



For More Questions:

Flipkart:
http://www.flipkart.com/advanced-test-strategy-istqb-foundation-questions-answers-included/p/itmdp9yzkgedxghz?pid=9781482812220

Amazon:
http://www.amazon.com/Advanced-Test-Strategy-Foundation--Questions-ebook/dp/B00FKS462K/


Friday, 20 December 2013

Test Management



            "... Project management throughout the development and implementation process was inadequate and at times ambiguous. A major systems integration project such as CAD
            Requires full time, professional, experienced project management. This was lacking...”

            "... The early decision to achieve full CAD implementation in one phase was misguided. In an implementation as far reaching as CAD it would have been preferable to implement in a step wise approach, proving each phase totally  before moving on to the next...”

Extract from the main conclusions of the official report into the failure of the London Ambulance Service's
                Computer Systems on October 26th and 27th 1992.



5.1 Overview

This module covers the overall management of the test effort for a particular project and attempts to answer several key questions such as:

How many testers do we need?

How shall the testers be organized?

What's the reporting structure and who is in charge?

How will we estimate the amount of testing effort required for this project?

How do we keep versions of our test material in line with the development deliverables?

How do we ensure the test effort remains on track?

How do we know that we have finished testing?

What is the process for logging and tracking incidents?







5.2 Objectives

After completing this module you will:

Understand how testing might be organized.

Understand the different roles in a test team.

Understand the importance of test estimation, monitoring and control.

Appreciate the need for configuration management of the test assets.

Understand how and why incidents must be logged and tracked.


5.3 Organization

"We trained hard ... but it seemed that every time we were beginning to form up into teams we would be reorganized... I was to learn later in life that we meet any new situation by reorganizing and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, -and demoralization."

A fundamental feature of our lives today is that nothing stays the same. Over time, internal and external pressures mean that the organizational structures we carefully put in place must change and adapt if our business is to remain competitive. As development and testing organizations grow and evolve, a different structure is required to cope with the changing demands placed upon them. The approach adopted over time may look something like this:

Testing may be each individual developer's responsibility.
Testing is the development team's collective responsibility (either through buddy testing or assigning one person on the team to be the tester).
There is a dedicated independent test team (who do no development).
Internal test consultants 'centers of excellence' provide advice to projects.
A separate company does the testing - this is known as outsourcing.

An excellent description of how the test function can be organized within a company can be found in Ed Kit's book, Software Testing in the Real World [KIT95].



5.5 Configuration management (CM)

We all appreciate the need for testing and assuring quality in our development systems. But how many of us appreciate that Configuration Management is a precursor to these goals?

Configuration Management provides us with the balance to ensure that:

Systems are complete.
Systems are predictable in content.
Testing required is identified.
Change can be ring-fenced as complete.
An audit trail exists.

We've always practiced CM. Such activities as aggregating and releasing software to the production environment may, in the past, have been uncontrolled - but we did it.

Many famous organizations have found the need to develop a standard for CM that they have since taken into the marketplace.

Configuration management encompasses much more than simply keeping version control of your software and test assets, although that is a very good start. Configuration management is crucial to successful testing, especially regression testing, because in order to make tests repeatable you must be able to recreate exactly the software and hardware environment that was used in the first instance.
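To make this concrete, here is a minimal sketch (Python, with illustrative field names that are assumptions rather than a prescribed format) of the kind of environment baseline a test team might store alongside each test run, so that the exact software and hardware configuration can be recreated later:

from dataclasses import dataclass, field, asdict
import json


@dataclass
class EnvironmentBaseline:
    """Snapshot of the software and hardware configuration a test ran against."""
    app_build: str              # exact build/version of the software under test
    os_version: str             # operating system version of the test device/server
    device_model: str           # hardware (or emulator) identifier
    dependencies: dict = field(default_factory=dict)  # e.g. middleware/library versions


@dataclass
class TestRunRecord:
    """Links a test script version to the baseline it executed against."""
    test_script_id: str
    test_script_version: str
    baseline: EnvironmentBaseline
    result: str                 # e.g. "pass" or "fail"


if __name__ == "__main__":
    run = TestRunRecord(
        test_script_id="SKYPE-CALL-001",
        test_script_version="1.3",
        baseline=EnvironmentBaseline(
            app_build="2.4.0.117",
            os_version="Android 4.4.2",
            device_model="Nexus 5",
            dependencies={"seetest-agent": "9.1"},
        ),
        result="pass",
    )
    # Store the record so the exact environment can be rebuilt for regression runs.
    print(json.dumps(asdict(run), indent=2))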

Typical symptoms of poor CM might include:

Unable to match source and object code.
Unable to identify which version of a compiler generated the object code.
Unable to identify the source code changes made in a particular version of the software.
Simultaneous changes are made to the same source code by multiple developers (and changes are lost).










5.6 Definitions

The ISO (International Standards Organization) definition of CM:

Configuration management (CM) provides a method to identify, build, move, control and recover any baseline in any part of the life cycle, and ensures that it is secure and free from external corruption.


Configuration identification requires that all configuration items (CIs) and their versions in the test system are known.

Configuration control is the maintenance of CIs in a library and the maintenance of records of how CIs change over time.

Status accounting is the function of recording and tracking problem reports, change requests, etc.

Configuration auditing is the function of checking the contents of libraries, etc., for standards compliance, for instance.

CM can be very complicated in environments where mixed hardware and software platforms are being used, but sophisticated cross platform CM tools are increasingly available.



5.7 Simple CM life cycle process

CM contains a large number of components. Each component has its own process and contributes to the overall process.

Let's take a look at a simple process that raises a change, manages it through the life cycle and finally executes an implementation to the production environment. Here we can see how the CM life cycle operates by equating actions with aspects of CM.

As a precursor activity we 'Evaluate Change'. All changes are evaluated before they enter the CM Life Cycle:

1. Raise Change Packet.
Uses Change Management functions to identify and register a change. Uses Change Control functions to determine that the action is valid and authorized.

2. Add Configurable Item to Change Packet.
Select and assign configurable items to the Change Packet. Execute impact analysis to determine the items that also require some action as a result of the change, and the order in which the actions should take place.

3. Check In.
Apply a version to the CI and bring it back under CM control.

4. Create Executable.
Build an executable for every contained CI in the order indicated.

5. Sign Off.
Uses Change Control to verify that the person signaling that testing is complete, for the environment in which the change is contained, is authorized to do so, and that the action is valid.

6. Check OK to Propagate.
Uses Change Control and co-requisite checks to verify that a request to move a Change Packet through the life cycle is valid:

a) All precursor tasks completed successfully.
b) The next life cycle environment is fit to receive.
c) Subsequent change in the current environment has not invalidated the Change Packet.

7. Propagate.
Populate the next environment by releasing the Change Packet and then distributing it over a wider area.

Note: You might notice that the process is composed of a number of singular functions and series executed as a cycle. Of particular note is that 'Create Executable' is a singular function. This is because we should only ever build once if at all possible; this primarily saves time and computer resources. In addition, re-building an element in a new environment may negate testing carried out in the preceding one and can lead to a lengthy problem investigation phase.
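The sketch below (Python, purely illustrative; the environment names and the authorisation check are assumptions, not part of any particular CM tool) models the steps above as a Change Packet that must be signed off before it can propagate to the next environment:

# Illustrative model of a Change Packet progressing through a simple CM life cycle.
LIFE_CYCLE = ["development", "system_test", "acceptance_test", "production"]


class ChangePacket:
    def __init__(self, change_id, authorised_by):
        self.change_id = change_id
        self.authorised_by = authorised_by      # Change Control: who may sign off
        self.items = []                         # configurable items (CIs) in the packet
        self.stage = 0                          # index into LIFE_CYCLE
        self.signed_off = False

    def add_item(self, ci_name, version):
        # Impact analysis decides which CIs belong here and in what order.
        self.items.append((ci_name, version))

    def sign_off(self, tester):
        # Only an authorised person may signal that testing is complete for this stage.
        if tester != self.authorised_by:
            raise PermissionError(f"{tester} is not authorised to sign off {self.change_id}")
        self.signed_off = True

    def propagate(self):
        # Check-OK-to-Propagate: precursor tasks done and a next environment exists.
        if not self.signed_off:
            raise RuntimeError("cannot propagate: testing not signed off in current environment")
        if self.stage + 1 >= len(LIFE_CYCLE):
            raise RuntimeError("already in the final environment")
        self.stage += 1
        self.signed_off = False                 # must be re-tested in the new environment
        return LIFE_CYCLE[self.stage]


if __name__ == "__main__":
    cp = ChangePacket("CHG-042", authorised_by="test.manager")
    cp.add_item("billing_module", "1.7")
    cp.sign_off("test.manager")
    print("moved to:", cp.propagate())          # -> system_test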

5.8 What does CM control?

CM should control every element that is part of an application system.

Nominally, CM:

1. Configurable Items:
Maintains the registration and position of all of our CIs. These may be grouped into logically complete change packets as part of a development or maintenance exercise.

2. Defines the Development Life Cycle:
The life cycle is composed of a series of transition points, each having its own entry/exit criteria and each mapping to a specific test execution stage.

3. Movement:
Controls how a Change Packet moves and progresses through the life cycle.

4. Environment:
Testing takes place in a physical environment configured specifically for the test stage that the life cycle transition point reflects.

5.9 How is it controlled?

CM is like every other project in that it requires a plan. It is particularly important that CM has a plan of what it is to provide, as this forms a framework for life cycle management in which to work consistently and securely.

The CM plan covers what needs to be done, not by when, and defines three major areas:

The Processes will define or include:
Raising a change
Adding elements
Booking elements in and out
Exit/entry criteria
Life cycle definition
Life cycle progression
Impact analysis
Ring-fencing change and release aggregation
Change controls
Naming standards
Generic processes for build and other activities

The Roles & responsibilities covering who and what can be done:
Configuration Manager & Librarian Project Manager, Operations Personnel Users, Developers and others as necessary

Records, providing necessary audit trail will include:
What is managed, the status of the life cycle position arid change status,
Who did what, where, when and under what authority. Also the success factor for activities.
Only once the CM plan and the processes that support it are defined can we consider automation.










5.10     What does CM look like?

CM has several hubs and functions that will make or break it. Hubs of the system are areas where information and source code are stored; typically the major hubs are the central inventory and the central repository. Surrounding these are four major tool sets that allow us to work on the data:

Version Management
Allows us access to any version or revision of a stored element.

Configuration Control
Allows us to group elements into manageable sets.

Change Control & Management
This is global name given to processes that govern change through application development life cycle and stages it passes through from an idea through to implementation. It may include:

Change Control Panel or Board to assess and evaluate change;
Controls: Governing who can do what, when and under what circumstances.
Management: Carrying out an action or movement through the life cycle once
                                                the controls have been satisfied

Build & Release
Control is how our elements are built and manner in which our change is propagated through life cycle.

  The view is about as close to genetic global view of CM as you can get. It won't match all tools 100% as it covers all aspects of CM - and very few of the tools (although they might claim to) can do this.








Exercise

Configuration management -1

Make a list of items that you think the Test Manager should insist are placed under configuration management control.



Exercise

Configuration management - 2

There are very many points to consider when implementing CM. We have summarized them into the following three categories:

CM Processes
The framework that specifies how our CM system is to operate and what it is to encompass.

Roles & Responsibilities
Who does what and at what time?

CM Records
The type of records we keep and the manner in which we keep and maintain them.

Quite a short list, you might say. Using the information we have learned so far, try to construct a minimal Configuration Management Plan. Do not try to expand the processes required, but give them suitable titles in an appropriate sequence.
Additionally, for every process you identify, try to match it to one or more segments of the CM bubble diagram.








5.11     Test estimation, monitoring and control

Test estimation
The effort required to perform the activities specified in the high-level test plan must be calculated in advance. You must remember to allocate time for designing and writing the test scripts as well as estimating the test execution time. If you are going to use test automation, there will be a steep learning curve for new people and you must allow for this as well. If your tests are going to run on multiple test environments, add in extra time here too. Finally, you should never expect to complete all of the testing in one cycle, as there will be faults to fix and tests that will have to be re-run. Decide on how many test cycles you will require and try to estimate the amount of rework (fault fixing and re-testing time).
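As a simple illustration of the arithmetic involved, the sketch below (Python, with made-up figures) totals scripting effort, execution effort across environments and cycles, and an allowance for rework and the automation learning curve. The numbers and proportions are assumptions, not recommended values:

# Rough test effort estimate (all figures in person-days; values are illustrative only).
num_test_cases = 120
design_and_scripting_per_case = 0.5      # writing the test script for one case
execution_per_case = 0.1                 # running one case once in one environment
num_environments = 2                     # extra environments multiply execution time
num_cycles = 3                           # re-runs to cover fault fixes
rework_fraction = 0.3                    # extra effort for fault fixing and re-testing
automation_learning_curve = 10           # one-off allowance for new tools/people

scripting = num_test_cases * design_and_scripting_per_case
execution = num_test_cases * execution_per_case * num_environments * num_cycles
rework = rework_fraction * (scripting + execution)

total = scripting + execution + rework + automation_learning_curve
print(f"Scripting: {scripting:.1f}  Execution: {execution:.1f}  "
      f"Rework: {rework:.1f}  Learning curve: {automation_learning_curve}")
print(f"Estimated total effort: {total:.1f} person-days")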

Test monitoring
Many test efforts fail despite wonderful plans. One of the reasons might be that the test team was so engrossed in the detailed testing effort (working long hours, finding many faults) that they did not have time to monitor progress. This, however, is vitally important if the project is to remain on track (e.g. use a weekly status report).









Exercise

Try and list what you might think are useful measures for tracking test progress.

The Test Manager will have specified some exit (or completion) criteria in the master test plan and will use the monitoring mechanism to help judge when the test effort should be concluded. The test manager may have to report on deviations from the project/test plans, such as running out of time before the completion criteria have been achieved.

Test control - in order to achieve the necessary test completion criteria it may be necessary to re-allocate resources, change the test schedule, increase or reduce test environments, employ more testers, etc.


5.12 Incident Management

An incident is any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. Incidents are raised when expected and actual test results differ.

5.12.1 What is an incident?

You may now be thinking that incidents are simply another name for faults, but this is not the case. We cannot determine at the time an incident occurs whether there is really a fault in the software, whether the environment was perhaps set up incorrectly, or whether in fact the test script was incorrect. Therefore we log the incident and move on to the next test activity.

5.12.2 Incidents and the test process

An incident occurs whenever an error, query or problem arises during the test process. There must be procedures in place to ensure accurate capture of all incidents. Incident recording begins as soon as testing is introduced into the system's development life cycle. The first incidents raised are therefore against documentation; as the project proceeds, incidents will be raised against database designs and, eventually, against the program code of the system under test.





5.12.3 Incident logging

Incidents should be logged when someone other than the author of the product under test performs the testing. When describing an incident, diplomacy is required to avoid unnecessary conflicts between the different teams involved in the testing process (e.g. developers and testers). Typically, the information logged on an incident will include:


. Name of tester(s), date/time of incident, software under test ID
. Expected and actual results
. Any error messages
. Test environment
. Summary description
. Detailed description, including anything deemed relevant to reproducing/fixing the potential fault (and continuing with work)
. Scope
. Test case reference
. Severity (e.g. showstopper, unacceptable, survivable, trivial)
. Priority (e.g. fix immediately, fix by release date, fix in next release)
. Classification
. Status (e.g. opened, fixed, inspected, retested, closed)
. Resolution code (what was done to fix the fault)

Incidents must be graded to identify the severity of incidents and improve the quality of reporting information. Many companies use a simple approach such as a numeric scale of 1 to 4, or high, medium and low. Beizer has devised a list and weighting for faults as follows:


1   Mild           Poor alignment, spelling, etc.
2   Moderate       Misleading information, redundant information
3   Annoying       Bills for 0.00, truncation of name fields, etc.
4   Disturbing     Legitimate actions refused; sometimes it works, sometimes not
5   Serious        Loss of important material; system loses track of data, records, etc.
6   Very serious   The mis-posting of transactions
7   Extreme        Frequent and widespread mis-postings
8   Intolerable    Long-term errors from which it is difficult or impossible to recover
9   Catastrophic   Total system failure or out-of-control actions
10  Infectious     Other systems are being brought down



In practice, in the commercial world at least, this list is over the top and many companies use a simple approach such as a numeric scale of 1 to 4, as outlined below:

1   Showstopper    Very serious fault; includes GPF, assertion failure or complete system hang
2   Unacceptable   Serious fault where the software does not meet business requirements and there is no workaround
3   Survivable     Fault that has an easy workaround - may involve partial manual operation
4   Cosmetic       Covers trivial faults like screen layouts, colours, alignments, etc.



Note that incident priority is not the same as severity. Priority relates to how soon the fault will be fixed and is often classified as follows:

1. Fix immediately.
2. Fix before the software is released.
3. Fix in time for the following release.
4. No plan to fix.

It is quite possible to have a severity 1 priority 4 incident and vice versa although the majority of severity 1 and 2 faults are likely to be assigned a priority of 1 or 2 using the above scheme.
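A minimal sketch (Python; the field names and enum values are illustrative, loosely following the lists above) of an incident record that keeps severity and priority as separate attributes:

from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime


class Severity(Enum):
    SHOWSTOPPER = 1   # e.g. GPF, assertion failure, complete system hang
    UNACCEPTABLE = 2  # no workaround, business requirement not met
    SURVIVABLE = 3    # easy workaround available
    COSMETIC = 4      # screen layouts, colours, alignments, etc.


class Priority(Enum):
    FIX_IMMEDIATELY = 1
    FIX_BEFORE_RELEASE = 2
    FIX_IN_NEXT_RELEASE = 3
    NO_PLAN_TO_FIX = 4


@dataclass
class Incident:
    incident_id: str
    tester: str
    software_under_test_id: str
    summary: str
    expected_result: str
    actual_result: str
    severity: Severity
    priority: Priority                  # assigned independently of severity
    status: str = "opened"              # opened, fixed, inspected, retested, closed
    raised_at: datetime = field(default_factory=datetime.now)


if __name__ == "__main__":
    # A low-severity fault can still be high priority (e.g. a typo on the login screen).
    inc = Incident(
        incident_id="INC-0101",
        tester="n.palani",
        software_under_test_id="BILLING-2.4",
        summary="Company name misspelt on login screen",
        expected_result="'Acme Ltd'",
        actual_result="'Amce Ltd'",
        severity=Severity.COSMETIC,
        priority=Priority.FIX_BEFORE_RELEASE,
    )
    print(inc.incident_id, inc.severity.name, inc.priority.name)

Keeping the two as independent fields makes it easy to report, for example, all severity 1 incidents that have not yet been given priority 1.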

5.12.4 Tracking and analysis

Incidents should be tracked from inception through various stages to eventual close-out and resolution. There should be a central repository holding the details of all incidents.

For management information purposes it is important to record the history of each incident. Incident history logs must be raised at each stage as the incident is tracked through to resolution, for traceability and audit purposes. This will also allow the formal documentation of the incidents (and the departments who own them) at a particular point in time.

Typically, entry and exit criteria take the form of the number of incidents outstanding by severity. For this reason it is imperative to have a corporate standard for the severity levels of incidents.

Incidents are often analyzed to monitor test process and to aid in test process improvement. It is often useful to look at sample of incidents and try to determine the root cause.

5.13     Standards for testing

There are now many standards for testing, classified as QA standards, industry-specific standards and testing standards. These are briefly explained in this section. QA standards simply specify that testing should be performed, while industry-specific standards specify what level of testing to perform. Testing standards specify how to perform testing.

Ideally testing standards should be referenced from the other two.

The following table gives some illustrative examples of what we mean:

Type                        Standard
QA standard                 ISO 9000
Industry-specific standard  Railway signaling standard
Testing standard            BS 7925-1, BS 7925-2

5.14 Summary.

In module five you have learnt that the Test Manager faces an extremely difficult challenge in managing the test team and estimating and controlling a particular test effort for a project. In particular you can now:

Suggest five different ways in which a test team might be organized.

Describe at least five different roles that a test team might have.

Explain why the number of test cycles and re-work costs are important factors in estimating.

Describe at least three ways that a test effort can be monitored.

 List three methods of controlling the test effort to achieve the necessary completion criteria.

Prioritize incidents.

Understand the importance of logging all incidents.

 Understand the need for tracking and analysis of incidents.


 For More Testing Techniques:







Flipkart:
http://www.flipkart.com/advanced-test-strategy-istqb-foundation-questions-answers-included/p/itmdp9yzkgedxghz?pid=9781482812220

Amazon:
http://www.amazon.com/Advanced-Test-Strategy-Foundation--Questions-ebook/dp/B00FKS462K/