Test Design Techniques

The main test design techniques are:

  •  BVA (Boundary Value Analysis)
  •  ECP (Equivalence Class Partitioning)
  •  Decision Table Based Testing
  •  State Transition Testing
  •  Error Guessing

1. BVA (Boundary Value Analysis):

  •  This technique validates the behavior of an application at the boundaries of its input ranges.
  •  It is based on testing at the boundary values.
  •  In BVA we test both valid and invalid input values around each boundary.

The conditions are:

              Min                       Max

              Min-1                     Max-1

              Min+1                     Max+1

These 6 conditions are enough.

BVA (Boundary Value Analysis) Example:

If a password field accepts 6-32 characters, we only need to test:

       Min : 6                Max  : 32

       Min-1 : 5              Max-1 : 31

       Min+1 : 7              Max+1 : 33

These 6 conditions are enough to test the password field.
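The six conditions above can be checked mechanically. Below is a minimal sketch (the validator function is an assumption for illustration, not a real system) that exercises the 6-32 character password rule at its boundaries:

```python
MIN_LEN, MAX_LEN = 6, 32  # hypothetical password length limits

def is_valid_password(password: str) -> bool:
    """Accept only passwords whose length lies in the 6-32 range."""
    return MIN_LEN <= len(password) <= MAX_LEN

# The six BVA conditions: Min-1, Min, Min+1, Max-1, Max, Max+1
boundary_cases = [(5, False), (6, True), (7, True),
                  (31, True), (32, True), (33, False)]

for length, expected in boundary_cases:
    assert is_valid_password("x" * length) == expected, f"length {length}"
```

Note that only lengths next to the boundaries are tested; values deep inside the range add no new information under BVA.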

2. ECP (Equivalence Class Partitioning):

  • This technique tests one representative value from each valid and invalid partition of the input.
  • The idea behind this technique is to divide the set of test conditions into groups (partitions) whose members can be treated the same.
  • It is a black-box (specification-based) testing technique in which we group the input data into logical partitions called equivalence classes.

ECP Example:

If an email id field accepts alphabetic and numeric data:

        Valid Data                                Invalid Data

        A – Z                                     All special characters

        a – z

        0 – 9

If a text field accepts 1000-1500, the partitions should be:

        Valid Data                                Invalid Data

        1000 – 1500                               a – z

                                                  A – Z

                                                  Numbers < 1000

                                                  Numbers > 1500

                                                  All special characters
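One representative value per partition is enough. A minimal sketch for the 1000-1500 text field (the `accepts` function is illustrative, not a real API):

```python
def accepts(value: str) -> bool:
    """Valid only when the input is an integer in the 1000-1500 partition."""
    if not value.isdigit():
        return False              # a-z, A-Z, special characters -> invalid classes
    return 1000 <= int(value) <= 1500

# One representative per equivalence class:
assert accepts("1250") is True    # valid partition: 1000-1500
assert accepts("999")  is False   # invalid partition: numbers < 1000
assert accepts("1501") is False   # invalid partition: numbers > 1500
assert accepts("abc")  is False   # invalid partition: alphabetic input
assert accepts("@#$")  is False   # invalid partition: special characters
```

Five checks cover all five partitions; testing more values from the same partition would not improve coverage.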

3. Decision Table Based Testing:

  • Decision table testing is a software testing technique used to test system behavior for different input combinations.
  • In this methodology, the various input combinations and the corresponding system behavior (output) are tabulated.
  • This is a systematic approach.
  • Decision table testing is a black-box test design technique (behavioral or behavior-based).
  • When a system has complex business rules, the decision table technique helps in identifying the correct test cases.

Example: How to Create a Decision Table for a Login Screen

Let's build a decision table for a login screen with User ID and Password input boxes and a Submit button.

The condition is simple: if the user provides the correct username and password, the user is redirected to the homepage. If either input is wrong, an error message is displayed.

T – Correct username/password

F – Incorrect username/password

E – An error message is displayed

H – The home screen is displayed


 Case 1 – Username and password both were correct, and the user navigated to homepage

 Case 2 – Username was correct, but the password was wrong. The user is shown an error message.

 Case 3 – Username was wrong, but the password was correct. The user is shown an error message.

 Case 4 – Username and password both were wrong. The user is shown an error message.

While converting this to test cases, we can create 2 scenarios:

First one:

Enter the correct username and correct password and click the Submit button; the expected result is that the user should be navigated to the homepage.

Second one, covering the scenarios below, as they all exercise the same rule:

  • Enter the correct username and an incorrect password and click Submit; the expected result is that the user should get an error message.
  • Enter an incorrect username and an incorrect password and click Submit; the expected result is that the user should get an error message.
  • Enter an incorrect username and the correct password and click Submit; the expected result is that the user should get an error message.
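The four rules can be generated and checked programmatically. A sketch, assuming a toy login function that returns 'H' for the home screen and 'E' for an error (both the function and the labels are illustrative):

```python
from itertools import product

def login(username_ok: bool, password_ok: bool) -> str:
    """Return 'H' (home screen) only when both conditions are true, else 'E' (error)."""
    return "H" if (username_ok and password_ok) else "E"

# Decision table: every T/F combination of the two conditions (4 rules).
decision_table = {
    (True,  True):  "H",   # Case 1: both correct -> homepage
    (True,  False): "E",   # Case 2: wrong password -> error
    (False, True):  "E",   # Case 3: wrong username -> error
    (False, False): "E",   # Case 4: both wrong -> error
}

for conditions in product([True, False], repeat=2):
    assert login(*conditions) == decision_table[conditions]
```

With n boolean conditions the full table has 2^n rules, which is why decision tables are most valuable when the rules are complex but the condition count is small.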

4. State Transition Testing:

  • State transition testing analyzes the behavior of an application for different input conditions.
  • In this technique, outputs are triggered by changes to the input conditions or changes to the 'state' of the system.
  • In other words, tests are designed to execute valid and invalid state transitions.
  • Testers provide positive and negative input values and record the system behavior against the state model on which the system and the tests are based.
  • It is a black-box technique, typically used for real-time systems with various states and transitions.

When to Use State Transition?

  • This can be used when a tester is testing the application for a finite set of input values.
  • When we have a sequence of events that occur and associated conditions that apply to those events.
  • This will allow the tester to test the application behavior for a sequence of input values. 

When Not to Rely on State Transition?

  • When the testing is not done for sequential input combinations.
  • If the testing is to be done for different functionalities like exploratory testing.


Let's consider a login page where, if the user enters an invalid password three times, the account is locked.

In this system, if the user enters a valid password in any of the first three attempts, the user is logged in successfully. If the user enters an invalid password on the first or second try, the user is asked to re-enter the password. Finally, if the user enters an incorrect password the third time, the account is blocked.

In the state table, when the user enters the correct password, the state transitions to S5 (Access Granted). If the user enters a wrong password, the system moves to the next state. If this happens a third time, the system reaches the Account Blocked state.
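The login-lock behavior can be modelled as a small state machine. A sketch under assumed state names (S1-S3 for attempts, S5 for access granted, S6 for blocked; the class itself is illustrative):

```python
class LoginStateMachine:
    """Three wrong passwords lock the account; a correct one grants access."""

    def __init__(self, correct_password: str):
        self.correct = correct_password
        self.wrong_attempts = 0
        self.state = "S1"                      # S1-S3: attempt states

    def enter(self, password: str) -> str:
        if self.state in ("S5", "S6"):         # terminal states: granted / blocked
            return self.state
        if password == self.correct:
            self.state = "S5"                  # S5: access granted
        elif self.wrong_attempts == 2:
            self.state = "S6"                  # S6: account blocked (3rd failure)
        else:
            self.wrong_attempts += 1
            self.state = f"S{self.wrong_attempts + 1}"  # S2, S3: retry prompts
        return self.state

# Valid transition: correct password on the 3rd attempt still grants access.
m = LoginStateMachine("secret")
assert m.enter("bad") == "S2"
assert m.enter("bad") == "S3"
assert m.enter("secret") == "S5"

# Invalid-path transition: three wrong attempts end in the blocked state.
m2 = LoginStateMachine("secret")
assert [m2.enter("bad") for _ in range(3)] == ["S2", "S3", "S6"]
```

State transition test cases are then simply sequences of `enter` calls, one per path through the state diagram.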

5. Error Guessing:

  • Testing is conducted by performing invalid operations and validating whether a meaningful error message is displayed.
  • The error message should be meaningful and easy to understand.
  • It is a testing method in which prior testing experience is used to uncover defects in the software. It is an experience-based technique in which the tester uses his/her past experience or intuition to guess the problematic areas of a software application.


  •  We need to test a program which reads a file. What happens if the program gets a file which is empty, or the file does not exist?
  •  Enter blank spaces into text fields.
  •  Upload files at the maximum allowed size.
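The empty-file and missing-file guesses can be turned into concrete checks. A sketch with an assumed `read_first_line` helper (the function and its error strings are invented for illustration):

```python
import os
import tempfile

def read_first_line(path: str) -> str:
    """Return the first line, or an error description for the guessed failure modes."""
    try:
        with open(path) as f:
            line = f.readline()
    except FileNotFoundError:
        return "error: file does not exist"
    if line == "":
        return "error: file is empty"
    return line.rstrip("\n")

# Error guessing: feed the program an empty file and a missing file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as tmp:
    empty_path = tmp.name                      # created but nothing written
assert read_first_line(empty_path) == "error: file is empty"
assert read_first_line("/no/such/file.txt") == "error: file does not exist"
os.unlink(empty_path)
```

Each guess becomes one invalid operation plus one check that the resulting message is meaningful.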


 Test Case Design

  • Test case design refers to how you set up your test cases. It is important that your tests are designed well, or you could fail to identify bugs and defects in your software during testing.
  • It is the responsibility of a tester to design the test cases.
  • To design the test cases, the tester should:
  • Understand the requirement
  • Identify the scenarios
  • Design the test cases
  • Review the test cases
  • Update the test cases based on the review

How to create good test cases

  • Test cases need to have simple, transparent, easy-to-understand steps.
  • Each and every test case should be traceable and linked to a Requirement ID.
  • Test cases should have only necessary and valid steps (brief and short).
  • Implement testing techniques: positive and negative test cases.
  • Test cases should be written so that they are easy to maintain.
  • Test cases should cover usability aspects in terms of ease of use.
  • A separate set of test cases should be prepared for basic performance testing, such as multi-user operations and capacity testing.
  • Test cases for security aspects should be covered, for example user permissions, session management, and logging methods.

Test Case Design Format:

Use the below fields to create test cases:

Test Case Id : A unique identifier for each test case. It is good practice to follow a naming convention for better understanding and differentiation, like Tc_proj_module_number(001)

For example: Tc_Yahoo_Inbox_001

Test Scenario :  A Test Scenario is defined as any functionality that can be tested. It is a collective set of test cases which helps the testing team to determine the positive and negative characteristics of the project. Test Scenarios are derived from test documents such as BRS and SRS.

Test Cases : A Test Case is a set of actions executed to verify a particular feature or functionality of your application. Test Cases are focused on what to test and how to test. Test Case is mostly derived from test scenarios.

Precondition :  The conditions that must be satisfied before the test case can be executed.

Priority :  The importance of the test case. It is a parameter used to decide the order in which test cases should be executed.

Test Steps :   Test Steps describe the execution steps. To execute test cases, you need to perform some actions. Each step is marked pass or fail based on the comparison result between the expected and actual outcome.

Test Data:  It is data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases. We need proper test data to execute the test steps.

Expected Result :  The output expected as per the customer requirement (as per the SRS or FRS); the result we expect once the test case is executed.

Post Condition :  Conditions that should hold once the test case has been successfully executed.

Actual Result :  The output result in the application once the test case was executed. Capture the result after the execution. Based on this result and the expected result, we set the status of the test case.

Status :  The status as Pass or Fail based on the expected result against the actual result. If the actual and expected results are the same, mention it as Passed. Else make it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.

Other important fields of a test case template:

Project Name: Name of the project the test cases belong to.

Module Name: Name of the module the test cases belong to.

Reference Document: Mention the path of the reference documents (if any), such as the Requirement Document, Test Plan, Test Scenarios, etc.

Author (Created By) : Name of the Tester who created the test cases.

Date of Creation: When the test cases were created.

Date of Review: When the test cases were reviewed.

Reviewed By: Name of the Tester who reviewed the test cases.

Executed By: Name of the Tester who executed the test case.

Date of Execution: When the test case was executed.

Comments: Include valuable information which helps the team.
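The fields above map naturally onto a simple record type. A sketch using a Python dataclass (the field selection follows the template above; the class and its `evaluate` helper are illustrative, not a real tool):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the test case template described above."""
    test_case_id: str                 # e.g. Tc_Yahoo_Inbox_001
    test_scenario: str
    precondition: str
    priority: str
    test_steps: list = field(default_factory=list)
    test_data: str = ""
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Not Executed"

    def evaluate(self) -> str:
        """Status is Passed if actual matches expected, else Failed."""
        self.status = "Passed" if self.actual_result == self.expected_result else "Failed"
        return self.status

tc = TestCase(
    test_case_id="Tc_Yahoo_Inbox_001",
    test_scenario="Verify login to inbox",
    precondition="User account exists",
    priority="High",
    test_steps=["Open login page", "Enter credentials", "Click Submit"],
    expected_result="Inbox is displayed",
)
tc.actual_result = "Inbox is displayed"
assert tc.evaluate() == "Passed"
```

In practice the same record usually lives in a spreadsheet or test management tool; the point here is only how Status derives from Expected vs Actual.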


Test Strategy Vs Test Plan

Test Strategy:

  • A test strategy is an outline that describes the testing approach of the software development cycle. 
  • Test strategy is a set of guidelines that explains test design and determines how testing needs to be done.
  • The creation and documentation of a test strategy should be done in a systematic way to ensure that all objectives are fully covered and understood by all.
  • It should also frequently be reviewed, challenged and updated as the organization and the product evolve over time.

Test Strategy Vs Test Plan:

Test Plan: A test plan for a software project is a document that defines the scope, objective, approach, and emphasis of a software testing effort.
Test Strategy: A test strategy is a set of guidelines that explains test design and determines how testing needs to be done.

Test Plan: Includes the introduction, features to be tested, test techniques, testing tasks, pass/fail criteria for features, test deliverables, responsibilities, schedule, etc.
Test Strategy: Includes objectives and scope, documentation formats, test processes, team reporting structure, client communication strategy, etc.

Test Plan: Carried out by a testing manager or lead; it describes how to test, when to test, who will test, and what to test.
Test Strategy: Carried out by the project manager; it says what type of technique to follow and which module to test.

Test Plan: Narrates the specifications.
Test Strategy: Narrates the general approaches.

Test Plan: Done to identify possible inconsistencies in the final product and mitigate them through the testing process.
Test Strategy: A plan of action for the testing process on a long-term basis.

Test Plan: Used at the project level only.
Test Strategy: Used at the organizational level.

Test Plan: Can change.
Test Strategy: Cannot be changed.

Test Plan: Used by one project only and is very rarely repeated.
Test Strategy: Used by multiple projects and can be repeated many times.

Test Plan: Test planning is done to determine possible issues and dependencies in order to identify risks.
Test Strategy: A long-term plan of action; information that is not project-specific can be abstracted into the test approach.


What is Test Planning and How to Create Test Plan?

Test Planning:

  • Initially, when we get the project, it is the responsibility of the test lead or a senior tester to prepare the test plan for the project.
  • A test plan is the foundation of every testing effort.
  • A Test Plan refers to a detailed document that catalogs the test strategy, objectives, schedule, estimations, deadlines, and the resources required for completing that particular project. 
  • A good test plan clearly defines the testing scope and its boundaries. You can use requirements specifications document to identify what is included in the scope and what is excluded.
  • By creating a clear test plan all team members can follow, everyone can work together effectively.

The steps to prepare the test plan:

  •      Check the criteria
  •      Prepare the test plan
  •      Get approval

1. Criteria:-

To prepare the test plan, the test lead/senior tester should have the below details:

  • Development schedule
  • Project release date
  • Services to be provided to the customer
  • Understanding of the project requirements
  • Understanding of the complexity of the project
  • Scope of testing
  • Number of resources required
  • Types of testing to be conducted

2. Prepare Test Plan :

  • A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing for a software product.
  • The test plan serves as a blueprint to conduct software testing activities as a defined process, which is minutely monitored and controlled by the test manager.

The Test Plan document has the below details:

  • Introduction
  • Features to be tested (in scope)
  • Features not to be tested (out of scope)
  • Pass/Fail criteria
  • Entry and Exit criteria
  • Approach
  • Schedule
  • Test factors
  • Resources
  • Training plans
  • Configuration management
  • Test deliverables
  • Test environment
  • Risk and mitigation plan
  • Approval

3. Get Approval : Once the test plan is ready, it should be approved by the test manager.

Let's talk about the different sections of a test plan in detail:

Introduction :-  Specify the purpose/requirements of the project.

Features to be Tested :-  Specify the requirements that need to be tested.

Features not to be Tested :-  Specify the requirements that are out of the scope of testing and do not need to be tested.

Pass/Fail Criteria (Suspension Criteria) :-  If any of the below criteria are satisfied, we can suspend the build from testing:

  • The build is not deployed
  • The sanity test fails
  • The build is unstable
  • The wrong build is given

(Note: If the build is suspended, the developer has to immediately resolve the issue and provide a modified build.)

Entry and Exit Criteria :-  Specify the conditions to start and stop the testing. Every level of testing has its own entry and exit criteria.

    For example :-

System Testing Entry Criteria :-

  • Integration testing should be completed
  • All major defects found in integration testing should be fixed
  • The test environment should be ready and the required access should be provided
  • Development (all coding) should be completed

System Testing Exit Criteria :-

  • All functionalities are tested
  • No new defects are found
  • No risk remains in the project
  • All defects are fixed
  • The testing schedule is completed

Approach :-  Specify the process followed in the project, like:

 Planning → Design → Execution → Reporting

Schedules :-  Specify the testing schedules. The testing schedules are prepared based on the development schedules.

Resources :-  Specify the list of teams/resources and their responsibilities, like the testing team, development team, project managers, etc.

Training Plans :-  Specify whether any training plan is required for the resources.

Configuration Management :-  Specify how to manage all project-related documents, code, testing documents, etc., so that these documents are accessible to all team members.

There are tools available for configuration management, like:

VSS – Visual SourceSafe

CVS – Concurrent Versions System

These tools provide security and version control (each time a document is updated, a new version is created).

Test Deliverables :-  Specify the list of documents to be submitted to the customer during or after testing, such as:

  • Test Plan
  • Test Case documents 
  • Test execution result
  • Defect report 
  • Review report
  • Test summary report etc.

Test Environment :-  Specify the hardware configuration details and the additional software required in the test environment to access the application and conduct the testing.

Risk and Mitigation Plan :-  Specify the challenges in the project and the solutions to resolve them.

The different risks are:

  • Required documents are not available
  • Estimations are not correct
  • Delays in project deliverables
  • Lack of skilled resources
  • Lack of team coordination, etc.

Approval :-  After preparation of the plan, the project manager and the customer have to review and approve it.

Note :- During the planning phase, it is the responsibility of the test lead to recruit testers based on the skills required for the project.


What is Software Testing Life Cycle?

The Software Testing Life Cycle (STLC) refers to the process of testing software: a set of actions carried out during testing to guarantee that the software quality objectives are satisfied.

Every project has to follow the below phases in testing, from the beginning of the project to its end:

  • Requirement Analysis
  • Test Planning
  • Test Case Design
  • Test Execution
  • Defect Reporting and Tracking
  • Test Closure

1. Requirement Analysis :- It is the tester's responsibility

  • Study the requirements
  • Identify types of tests to be performed
  • Prepare RTM
  • Automation feasibility analysis

2. Test Planning :- It is the test lead's responsibility

  • Understand the project
  • Scope of testing
  • Identify the resources
  • Schedules
  • Deliverables
  • Approach
  • Effort estimations

3. Test Case Design :- It is the tester's responsibility

  • Design/script the test cases
  • Review the test cases
  • Update the test cases based on review and baseline them
  • Create test data

4. Test Execution :- It is the tester's responsibility

  • Setup test environment
  • Execute tests as per plan
  • Document test results

5. Defect Reporting and Tracking :- It is the tester's responsibility

  • Report defect to developers 
  • Map defects to test cases in RTM
  • Re-test the defect fixes
  • Track the status of defect to closure

6. Test Closure :- It is the tester's and the test lead's responsibility

Stop testing based on:

  • All functionalities are tested
  • No new defects are found
  • Schedules are completed
  • No risk remains in the project
  • The test closure report is prepared


Monkey and Gorilla Testing

Monkey Testing

  • Monkey testing is a type of software testing in which a software application is tested using random inputs, with the sole purpose of trying to break the system.
  • Monkey testing tests the whole system; there are no rules in this type of testing.
  • Monkey testing is usually implemented as random, automated unit tests.
  • Monkey testing is also known as random testing, fuzz testing, or stochastic testing.
  • Monkey testing is a useful method for validating the robustness of the application.
  • Monkey testing is a kind of black-box testing.
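The "random inputs, no test cases" idea can be sketched in a few lines. The `process` function below is a stand-in for the system under test (an assumption, not from the source); the only real check is that nothing crashes:

```python
import random
import string

def process(text: str) -> int:
    """Stand-in system under test: counts vowels in the input."""
    return sum(ch in "aeiou" for ch in text.lower())

random.seed(42)                       # reproducible randomness
for _ in range(1000):
    length = random.randint(0, 50)
    junk = "".join(random.choice(string.printable) for _ in range(length))
    result = process(junk)            # no expected value -- it just must not crash
    assert 0 <= result <= len(junk)   # a loose sanity bound, not a test case
```

Real monkey-testing tools drive a UI or API the same way, feeding random events rather than random strings.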

Gorilla Testing

  • Gorilla testing is a software testing technique that repeatedly applies inputs to a module to ensure it is functioning correctly and contains no bugs.
  • Gorilla testing is a manual procedure performed on selected modules of the software system with selected test cases.
  • Gorilla testing is not random; it is concerned with finding faults in a particular module of the system.
  • Gorilla testing is also known as torture testing, fault tolerance testing, or frustrating testing.
  • Gorilla testing is manual and is performed repetitively.
  • Gorilla testing is a kind of white-box testing.

Difference between monkey and gorilla testing:

Monkey Testing: A type of software testing performed with random inputs, without any test cases, to check the behavior of the system and confirm whether it crashes or not.
Gorilla Testing: A type of software testing performed on a module with some inputs repeatedly, to check the module's functionalities and confirm there are no bugs in that module.

Monkey Testing: No test cases are used, as it is a form of random testing.
Gorilla Testing: Performed repeatedly, as it is a part of manual testing.

Monkey Testing: Primarily used in system testing.
Gorilla Testing: Mainly used in unit testing.

Monkey Testing: Implemented on the whole system.
Gorilla Testing: Implemented on a few selected components of the system.

Monkey Testing: No software knowledge is required to execute it.
Gorilla Testing: Minimal software knowledge is required to execute it.

Monkey Testing: The main objective is to check whether the system crashes or not.
Gorilla Testing: The main objective is to check whether the module is working properly or not.

Monkey Testing: Also known as random testing, fuzz testing, or stochastic testing.
Gorilla Testing: Also known as torture testing, fault tolerance testing, or frustrating testing.

Monkey Testing: Has three types: dumb monkey testing, smart monkey testing, and brilliant monkey testing.
Gorilla Testing: Has no such different types.

Monkey Testing: Does not require any planning or preparation.
Gorilla Testing: Cannot be implemented without preparation and planning.


Retesting and Regression Testing


Retesting

  • Retesting essentially means testing something again.
  • In simple words, retesting is testing a specific bug again after it has been fixed.
  • Retesting ensures that the issue has been fixed and the functionality is working as expected.
  • It is planned testing with proper steps of verification.
  • In some cases the entire module needs to be re-tested to ensure the quality of the module.

Why and when we perform Retesting

  • Retesting is used when there is a specific error or bug which needs to be verified.
  • It is used when a bug is rejected by the developers; the testing team then re-tests to check whether the bug is genuine or not.
  • It is also used to check the whole system to verify the final functionality.
  • It can even cover an entire module or component in order to confirm the expected functionality.


Advantages of Retesting:

  • Retesting ensures that the issue has been fixed and is working as expected.
  • It improves the quality of the application or product.
  • It requires less time for verification, because it is limited to the specific issue or a particular feature.
  • If the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.


Disadvantages of Retesting:

  • It requires a new build for verification of the defect.
  • The test cases for retesting can be obtained only once testing has started, not before.
  • The test cases for retesting cannot be automated.

Regression Testing

  • If any changes are made to an existing build, this test is conducted on the modified build to verify that the changes work correctly and that they have no side effects.
  • In regression testing, the changed functionality plus the dependent functionality are tested.
  • The purpose of regression testing is to find bugs which may have been introduced accidentally because of new changes or modifications.
  • It also ensures that bugs fixed earlier do not reappear.
  • This helps maintain the quality of the product along with the new changes in the application.

When to Use Regression Testing

  • Any new feature is added to an existing feature.
  • The codebase is changed to fix defects.
  • Any bug is fixed.
  • Changes are made to the configuration.
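A regression suite simply re-runs every previously passing check after a change. A minimal sketch (the `discount` function and its history of cases are invented for illustration):

```python
def discount(price: float, percent: float) -> float:
    """Illustrative function under change: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Previously passing cases, kept and re-run after every code change.
regression_suite = [
    ((100, 10), 90.0),
    ((50, 0), 50.0),
    ((80, 25), 60.0),
]

def run_regression():
    """Return the cases that no longer pass; an empty list means no regressions."""
    return [(args, exp) for args, exp in regression_suite
            if discount(*args) != exp]

assert run_regression() == []   # all earlier behaviour still intact
```

Because the suite only grows over time, automating it (for example with a test runner in CI) is what keeps the repeated execution affordable.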


Advantages of Regression Testing:

  • It helps the team to identify defects and eliminate them earlier in the software development life cycle.
  • It ensures continuity of business functions despite rapid changes in the software.
  • Regression testing can be done using automation tools.
  • It helps us to make sure that changes such as bug fixes or enhancements to a module or application have not impacted the existing tested code.


Disadvantages of Regression Testing:

  • If regression testing is done without automated tools, it can be very tedious and time-consuming, because the same set of test cases is executed again and again.
  • Regression testing has to be performed for every small change in the code, as even a small portion of code can create issues in the software.
  • It takes time to complete the tests, and this slows down agile velocity.
  • It is expensive, and the cost is hard to justify.

Re-testing vs Regression Testing:

Retesting: Retesting is about verifying fixes for specific defects that you have already found.
Regression Testing: Regression testing is about searching for new defects.

Retesting: Done only for failed test cases.
Regression Testing: Performed for passed test cases.

Retesting: Ensures that the test cases which failed in the last execution pass after the fixes.
Regression Testing: Ensures that changes have not affected the unchanged parts of the product.

Retesting: Verification of bug fixes is included in retesting.
Regression Testing: Verification of bug fixes is not included in regression testing.

Retesting: Of high priority, so it is done before regression testing.
Regression Testing: Can be done in parallel with retesting.

Retesting: Retesting test cases cannot be automated.
Regression Testing: Regression test cases can be automated.

Retesting: The testing is done in a planned way.
Regression Testing: The testing style is generic.

Retesting: Test cases can be obtained only when the testing starts.
Regression Testing: Test cases can be obtained from the specification documents and bug reports.


What is Adhoc Testing? Difference between Adhoc testing and Exploratory Testing

 Adhoc Testing

  • When software testing is performed without proper planning and documentation, it is said to be adhoc testing.
  • Ad hoc testing is an informal or unstructured software testing type that aims to break the system in order to find possible defects.
  • Ad hoc testing is performed without a plan of action, and any actions taken are not typically documented.
  • Ad hoc testing implies learning the software before testing it.
  • Ad hoc testing is done by executing random scenarios; it is a form of negative testing which aims to make the testing more thorough.

Why do we do adhoc testing?

  • The main aim of ad hoc testing is to find any defects through random checking.
  • This can uncover very specific and interesting defects, which are easily missed when using other methods.
  • Ad hoc testing can be performed when there is limited time to do exhaustive testing and usually performed after the formal test execution. 
  • Ad hoc testing will be effective only if the tester has in-depth understanding about the System Under Test.


Advantages of Adhoc Testing:

  • This method is very simple and can be performed without any training.
  • It can be used when the time period is limited.
  • It can uncover very specific and interesting defects, which are easily missed when using other methods.
  • This testing can be performed at any time during the Software Development Life Cycle (SDLC).


Disadvantages of Adhoc Testing:

  • This method is not recommended when more systematic methods are available.
  • The actual testing process is not documented, since it does not follow particular test cases.
  • It is difficult for testers to reproduce an error found during ad hoc testing.

Adhoc Testing vs Exploratory Testing

Adhoc Testing: Ad hoc testing implies learning the software before testing it.
Exploratory Testing: In exploratory testing, you learn and test the software simultaneously.

Adhoc Testing: Documentation is not a basic need of this type of testing; the QA team often tests without specific documentation.
Exploratory Testing: Documentation is mandatory; to assure quality it is necessary to document the details of the testing.

Adhoc Testing: Ad hoc testing is about the thoroughness of the testing.
Exploratory Testing: Exploratory testing is more about learning the application.

Adhoc Testing: Ad hoc testing helps to find innovative ideas from the research.
Exploratory Testing: It helps to develop the application.

Adhoc Testing: Ad hoc testing is a technique of testing an application; it plays a significant role in software production.
Exploratory Testing: This is an approach to testing that combines the learned test results and creates a new solution.

Exploratory Testing: It mostly works on the business concerns and increases knowledge about the application.

Exploratory Testing: It categorizes problems and compares them with problems found in the past, which helps reduce time consumption.

Adhoc Testing: Adhoc testing does not necessarily need to be executed by an expert software test engineer.
Exploratory Testing: This always needs to be done by an expert.

Adhoc Testing: It mostly works as negative testing.
Exploratory Testing: This testing works in the positive testing niche.


What is Exploratory Testing?

Exploratory Testing

  • Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. 
  • Exploratory testing allows you to think outside the box and come up with scenarios that might not be covered in a test case.
  • In simple words, it is a type of software testing where test cases are not created in advance; testers check the system on the fly.
  • It focuses on discovery and relies on the guidance of the individual tester to uncover defects that are not easily covered in the scope of other tests.
  • It doesn't restrict the tester to a predefined set of instructions.
  • This is a test approach that can be applied to any test technique, at any stage in the development process.

When should we do Exploratory Testing?

  • In many software cycles, an early iteration is required when teams don’t have much time to structure the tests. Exploratory testing is quite helpful in this scenario.
  • When testing mission-critical applications, exploratory testing ensures you don’t miss edge cases that lead to critical quality failures.
  • It is especially useful to find new test scenarios to enhance the test coverage.
  • It helps review the quality of a product from a user perspective. 

Advantages of Exploratory testing

  • It requires limited preparation, which allows you to save time and quickly jump to execution.
  • In Exploratory Testing, you can generate your own test scenarios on the fly, which will allow you to dive deeper into the functional aspects of the product.
  • This test is much less time-consuming.
  • It allows you to think outside the box and come up with scenarios that might not be covered in a test case.
  • This allows the tester to find defects that are beyond the scope of the listed scenarios.
  • This is an approach that is most useful when there are no or poor specifications and when time is severely limited.

Disadvantages of Exploratory testing

  • In exploratory testing, once testing is performed it is not reviewed.
  • Difficult to document each procedure.
  • It is difficult to reproduce the failure.
  • It is not easy to say which tests were already performed.
  • Limited by the tester's skill set.
  • Reporting is difficult without planned scripts.


What is Positive and Negative Testing?

Positive Testing

  • Positive testing is a type of testing performed on a software application by providing valid data as input.
  • It is performed by assuming everything will behave as expected.
  • It is performed with the assumption that only valid and relevant events will occur.
  • Testing is performed within the boundaries, checking that the product/application behaves as per the specification document with a valid set of test data.
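As a minimal sketch, positive testing of a hypothetical password validator (assumed, like the BVA example earlier, to accept 6-32 characters) feeds only valid inputs and expects every one of them to be accepted:

```python
# Hypothetical validator: accepts passwords of 6 to 32 characters.
def is_valid_password(password: str) -> bool:
    return 6 <= len(password) <= 32

# Positive testing: only valid, in-boundary inputs; all should pass.
valid_inputs = ["abc123", "a" * 6, "a" * 32, "Str0ng!Pass"]
for p in valid_inputs:
    assert is_valid_password(p), f"expected {p!r} to be accepted"
```

Note that the inputs include the boundary values Min (6) and Max (32) themselves, tying positive testing back to boundary value analysis.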

Negative Testing

  • Negative testing is a method of testing an application or system that ensures the application meets its requirements and can handle unwanted input and user behavior gracefully. 
  • It is also known as error-path testing or failure testing, and it helps identify more bugs and enhance the quality of the software application under test.
  • Negative testing uses invalid input data, or undesired user behaviors, to check for unexpected system errors.
  • In simple terms, negative testing is executed with a negative point of view, deliberately trying to break the application.
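Continuing the same hypothetical password validator (6-32 characters), negative testing feeds only invalid, out-of-boundary inputs and expects every one of them to be rejected:

```python
# The same hypothetical validator: accepts passwords of 6 to 32 characters.
def is_valid_password(password: str) -> bool:
    return 6 <= len(password) <= 32

# Negative testing: invalid inputs, including Min-1 and Max+1 boundaries.
invalid_inputs = ["", "abcde", "a" * 33]  # empty, 5 chars, 33 chars
for p in invalid_inputs:
    assert not is_valid_password(p), f"expected {p!r} to be rejected"
```

A real negative test suite would also cover undesired behaviors beyond length, such as wrong data types or injection strings, but the pattern is the same: invalid input in, graceful rejection out.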
Difference between positive testing and negative testing:

| Positive Testing | Negative Testing |
| --- | --- |
| Positive testing means testing the application or system with valid data. | Negative testing means testing the application or system with invalid data. |
| It is always done to verify a known set of test conditions. | It is always done to try to break the product with an unknown set of test conditions. |
| It ensures the software behaves normally with expected input. | It helps make the software more defect free and robust. |
| It does not cover all possible cases. | It aims to cover the unexpected cases that positive testing misses. |
| It can be performed by people with less experience. | It is usually performed by experienced professionals. |
| Positive testing is implemented only for expected conditions. | Negative testing is implemented only for unexpected conditions. |
| It is considered less important compared to negative testing. | It is considered more important compared to positive testing. |
| Positive testing can be implemented on every application. | Negative testing is implemented where unpredicted conditions are possible. |
