Software Testing Definitions
Explore our comprehensive list of software testing definitions to enhance your understanding.
A/B Testing: A method of comparing two versions of a webpage or app against each other to determine which one performs better.
Acceptance Testing: Testing conducted to determine if a system satisfies its acceptance criteria and whether the customer should accept the system.
Accessibility Testing: Testing performed to ensure that a web or mobile application is usable by people with disabilities, such as visual, auditory, cognitive, and motor impairments.
Ad hoc Testing: Informal testing without a specific plan or documentation. Often relies on the tester's intuition and experience.
Agile Testing: A testing practice that follows the principles of agile software development, with a focus on continuous testing and feedback.
Alpha Testing: Testing performed by potential users or an independent test team at the developer’s site.
Automated Testing: The use of software tools to execute pre-scripted tests on a software application before it is released into production.
Basis Path Testing: A white-box testing technique based on the control structure of the procedural design to define a basis set of execution paths.
Behavior-Driven Development (BDD): A development approach that includes writing tests in a language that non-technical stakeholders can understand.
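For illustration, a minimal plain-Python sketch of the Given/When/Then structure that BDD tools such as Cucumber or behave formalize in Gherkin; the ShoppingCart class is a made-up example, not part of any real framework:

```python
import unittest


class ShoppingCart:
    """Hypothetical class used only to illustrate the Given/When/Then flow."""

    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)


class TestCheckout(unittest.TestCase):
    def test_total_reflects_added_items(self):
        # Given an empty shopping cart
        cart = ShoppingCart()
        # When the customer adds two items
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        # Then the total is the sum of the item prices
        self.assertEqual(cart.total(), 14.00)


if __name__ == "__main__":
    unittest.main()
```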
Beta Testing: A phase of testing where a product is released to a limited audience outside of the core development team to uncover any bugs or issues before the final release.
Big Bang Integration Testing: An approach where all components or modules are integrated simultaneously, and the system is tested as a whole.
Black Box Testing: Testing that focuses on the inputs and outputs of the software system without considering how the internal code works.
Boundary Testing: Testing that focuses on the values at the boundaries of input domains.
Boundary Value Analysis (BVA): A test design technique to identify errors at the boundaries rather than within the ranges.
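As a sketch, suppose a hypothetical validate_age function accepts ages from 18 to 65 inclusive; boundary value analysis concentrates the tests on the edges of that range:

```python
import unittest


def validate_age(age):
    """Hypothetical rule: ages 18 through 65 inclusive are accepted."""
    return 18 <= age <= 65


class TestAgeBoundaries(unittest.TestCase):
    def test_values_at_and_around_the_boundaries(self):
        # Just below, on, and just above each boundary of the valid range.
        self.assertFalse(validate_age(17))   # below lower boundary
        self.assertTrue(validate_age(18))    # lower boundary
        self.assertTrue(validate_age(19))    # just above lower boundary
        self.assertTrue(validate_age(64))    # just below upper boundary
        self.assertTrue(validate_age(65))    # upper boundary
        self.assertFalse(validate_age(66))   # above upper boundary


if __name__ == "__main__":
    unittest.main()
```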
Bug: An error, flaw, or fault in a software program that causes it to produce an incorrect or unexpected result.
Cause-Effect Graphing: A technique used to derive test cases by identifying input conditions (causes) and their effects.
Chaos Engineering: The discipline of experimenting on a software system in production to build confidence in the system's capability to withstand turbulent and unexpected conditions.
Checkpoint: A predetermined point in the software development process where certain deliverables must be reviewed and approved.
Code Coverage: A measure used in software testing to describe the degree to which the source code is tested by a particular test suite.
Compatibility Testing: Testing to ensure software can run on different hardware, operating systems, applications, network environments, or mobile devices.
Component Testing: Testing individual components of the software in isolation.
Condition Coverage: A white-box testing metric that ensures each individual condition in a decision statement is evaluated to both true and false at least once during testing.
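A small sketch of the idea: for a decision with two conditions, the test set must make each condition take both truth values; the can_checkout function here is hypothetical:

```python
def can_checkout(logged_in, cart_not_empty):
    """Hypothetical decision with two individual conditions."""
    return logged_in and cart_not_empty


# Across these calls, each individual condition takes both the value True
# and the value False, which is what condition coverage requires.
assert can_checkout(True, True) is True     # logged_in=True,  cart_not_empty=True
assert can_checkout(True, False) is False   # logged_in=True,  cart_not_empty=False
assert can_checkout(False, True) is False   # logged_in=False, cart_not_empty=True
```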
Continuous Integration (CI): A development practice where developers integrate code into a shared repository frequently, which is then automatically tested.
Cross-Browser Testing: Testing the software application across multiple web browsers to ensure consistent behavior and appearance.
Data-Driven Testing (DDT): A testing methodology in which test data is read from a data file (e.g., a spreadsheet, XML file, database) and used to run test scripts.
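A minimal data-driven sketch using Python's csv module and unittest subtests; the inline CSV stands in for an external data file, and the add function is hypothetical:

```python
import csv
import io
import unittest

# Inline CSV standing in for an external test-data file (e.g., a spreadsheet export).
TEST_DATA = """a,b,expected
1,2,3
0,0,0
-5,5,0
10,-3,7
"""


def add(a, b):
    """Hypothetical function under test."""
    return a + b


class TestAddDataDriven(unittest.TestCase):
    def test_add_from_data_file(self):
        for row in csv.DictReader(io.StringIO(TEST_DATA)):
            with self.subTest(row=row):
                self.assertEqual(add(int(row["a"]), int(row["b"])), int(row["expected"]))


if __name__ == "__main__":
    unittest.main()
```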
Decision Coverage: A metric used in white-box testing to ensure that each decision point in the code has been executed and evaluated to both true and false.
Defect Density: A metric calculated as the number of confirmed defects detected in a software component or system divided by the size of that component or system.
Defect: A deviation from the expected result or behavior in software.
End-to-End Testing: Testing that verifies the complete system flow from start to finish to ensure all components work together as expected.
Endurance Testing: A type of performance testing that evaluates how a software application performs over an extended period to ensure it can handle prolonged use without degradation or failure.
Equivalence Partitioning: A test case design technique that divides the input data into partitions of equivalent data from which test cases can be derived.
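A sketch of the technique: a hypothetical shipping_fee function treats whole ranges of order totals the same way, so one representative value per partition (including the invalid partition) is enough:

```python
def shipping_fee(order_total):
    """Hypothetical rule: negative totals are invalid, orders under 50 pay 5, 50 and above ship free."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0 if order_total >= 50 else 5


# One representative test value per equivalence partition.
assert shipping_fee(20) == 5        # partition: 0 <= total < 50
assert shipping_fee(120) == 0       # partition: total >= 50
try:
    shipping_fee(-10)               # partition: invalid (negative) totals
except ValueError:
    pass
else:
    raise AssertionError("negative totals should be rejected")
```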
Error Guessing: A test case design technique where experienced testers use their intuition and experience to guess the problematic areas of the application.
Expected Results: The specific outcomes predicted to occur when a particular test case is executed.
Exploratory Testing: An approach to testing where testers actively explore the application without a predefined plan, often simultaneously learning and creating test cases.
Failure Mode: The specific manner in which a software application or system can fail or produce incorrect results. It describes the symptoms and conditions under which a failure occurs, helping to identify potential weaknesses and improve the reliability of the system.
Failure Modes and Effects Analysis (FMEA): A systematic method for identifying and evaluating potential failure modes within a system, product, or process and analyzing their effects. It involves assessing the severity, occurrence, and detection of each potential failure to prioritize risks and implement corrective actions that enhance reliability and prevent failures.
Fault Tolerance: The ability of a software system or application to continue operating correctly even when one or more of its components fail. It involves designing the system so that it can detect, isolate, and manage faults without interrupting overall functionality.
Feature: A distinct and specific functionality or characteristic of a software application that provides value to the user. Features are typically described in terms of user requirements and are implemented to fulfill particular needs or perform specific tasks within the software.
Functional Testing: Testing the functions of a system or component against the specified requirements.
Functional Requirement: A specification of a system's behavior or functions that outlines what the system should do. It describes the interactions between the system and its users, as well as other systems, and includes detailed descriptions of inputs, outputs, data processing, and any other actions the system must perform to fulfill user needs and business objectives.
Fuzz Testing (Fuzzing): A technique used to discover coding errors and security loopholes by inputting massive amounts of random data, called fuzz, to the system in an attempt to make it crash.
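A toy fuzzing loop for illustration, aimed at Python's built-in json.loads: random byte strings are fed in, expected parse errors are ignored, and any other exception is flagged as a potential defect:

```python
import json
import random

random.seed(0)  # reproducible fuzz run

for _ in range(1000):
    # Generate a short chunk of random bytes and decode it leniently.
    fuzz = bytes(random.randrange(256) for _ in range(random.randrange(1, 40)))
    text = fuzz.decode("utf-8", errors="replace")
    try:
        json.loads(text)
    except ValueError:
        pass  # malformed input rejected cleanly (JSONDecodeError is a ValueError): expected
    except Exception as exc:  # anything else would be worth investigating
        print(f"Unexpected failure for input {text!r}: {exc!r}")
```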
Gamma Testing: The final testing stage before the software is released, often done by a small group of external users or customers who are not part of the development team.
Gherkin: A business-readable, domain-specific language used in behavior-driven development (BDD) to define test cases.
Glass Box Testing: Another term for white-box testing, where the internal structure, design, and implementation of the item being tested are known to the tester.
Gorilla Testing: A form of software testing where one module or functionality is tested thoroughly and heavily, often repeatedly, to ensure its reliability and stability.
Gray Box Testing: A testing technique that combines both black-box and white-box testing methods. Testers have some knowledge of the internal workings of the software but do not have full access to its source code.
GUI Testing (Graphical User Interface Testing): The process of testing a product's graphical user interface to ensure it meets its specifications. This involves checking the functionality, usability, and visual aspects of the interface.
Heuristic Evaluation: A usability inspection method for computer software that helps to identify usability problems in the user interface design.
Hybrid Testing: Combining multiple testing strategies and techniques to improve the effectiveness and efficiency of the testing process. This can involve combining automated and manual testing, or integrating different types of testing such as functional, performance, and security testing.
Incident: Any event occurring during the testing process that requires investigation. An incident could be a deviation from expected results, an unexpected event, or any issue that could impact the testing process.
Incident Report: Documentation of any event that occurs during the testing process that needs investigation. It details the nature of the incident, steps to reproduce it, and its impact.
Incremental Testing: A testing approach where individual components or systems are integrated and tested one by one until the entire system is tested.
Independence of Testing: The separation of responsibilities to avoid conflicts of interest and ensure objective testing. This can involve having testing activities performed by a different team or individuals than those who developed the software.
Informal Review: A type of review with no formal process or documentation requirements. It typically involves a simple meeting or casual discussion among team members to evaluate the quality of work products.
Inspection: A formal review process that involves a detailed examination of a work product by a team of qualified individuals. The goal is to identify defects, compliance issues, and improvements.
Integration Testing: Testing performed to expose defects in the interfaces and interactions between integrated components or systems. This can be done incrementally as components are integrated or all at once after all components are integrated.
Interoperability Testing: Testing conducted to ensure that a system or component can interact with other systems or components as expected. This includes verifying data exchange, functionality, and compatibility.
JUnit: A widely used open-source testing framework for the Java programming language, used to write and run repeatable automated tests.
Just-In-Time Testing: A testing approach where test activities are planned and executed just before they are needed, rather than being planned well in advance. This can help ensure that the most current information and code are being tested.
Keyword-Driven Testing: A testing methodology in which test scripts are developed based on keywords related to the actions to be performed. Each keyword corresponds to a specific operation or function, making test scripts easier to read and maintain.
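A minimal keyword-driven sketch: a table of (keyword, argument) rows drives a dispatch dictionary; the keywords and actions here are made up for illustration, whereas a real framework would drive a UI or an API:

```python
# Hypothetical keyword implementations.
def open_page(url):
    print(f"opening {url}")

def type_text(text):
    print(f"typing {text!r}")

def click(target):
    print(f"clicking {target}")

KEYWORDS = {
    "open": open_page,
    "type": type_text,
    "click": click,
}

# The "test script" is just data: keyword plus argument, readable by non-programmers.
TEST_STEPS = [
    ("open", "https://example.com/login"),
    ("type", "alice@example.com"),
    ("click", "Sign in"),
]

for keyword, argument in TEST_STEPS:
    KEYWORDS[keyword](argument)
```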
Known Defect: A defect that has been identified, documented, and acknowledged by the development or testing team, but may not yet be resolved or fixed.
KPI (Key Performance Indicator): Metrics used to measure the effectiveness and success of testing activities and processes. KPIs can include defect detection rates, test coverage, test execution times, and more.
Latent Defect: A defect that exists in the system but has not yet caused a failure because the exact conditions for triggering the defect have not been met.
Load Testing: A type of performance testing to evaluate how a system behaves under a specific load, typically the expected number of concurrent users or transactions. The goal is to identify performance bottlenecks and ensure the system can handle the expected load.
Localization Testing: Testing to ensure that the software behaves correctly in a specific locale or region, including language translation, cultural context, date and time formats, currency, and other regional settings.
Logical Test Case: A test case that includes high-level test scenarios and conditions without specifying the exact input data and expected results. It focuses on the logic and flow of the test rather than detailed specifics.
Maintainability Testing: The process of testing to determine how easily a software system or component can be modified to correct faults, improve performance, or adapt to a changed environment.
Manual Testing: The process of manually executing test cases without the use of automated tools. Testers perform the tests using the application to identify defects and ensure that the software behaves as expected.
Master Test Plan: A high-level test plan that unifies and summarizes all individual test plans. It outlines the overall test strategy, objectives, resources, schedule, and scope for the entire testing effort.
Maturity Model: A framework that describes the stages of maturity through which processes and practices evolve from initial, ad hoc practices to optimized, well-managed processes. Examples include the Capability Maturity Model Integration (CMMI) and the Testing Maturity Model Integration (TMMi).
Metric: A standard of measurement used to quantify various attributes in software testing, such as test coverage, defect density, and test execution progress.
Model-Based Testing (MBT): A testing approach where test cases are derived from models that describe the functional aspects of the system. These models can be state machines, decision tables, or other representations.
Monkey Testing: A type of random testing where the tester inputs random data into the system to check for unexpected behavior or crashes. It is often used to test the robustness of the system.
Mutation Testing: A method of software testing where the program is modified in small ways (mutants) to check if the existing test cases can detect these changes. It helps to evaluate the quality and effectiveness of the test cases.
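A hand-rolled illustration of the idea (real tools such as mutmut for Python or PIT for Java generate and run mutants automatically): a single operator mutation is applied to a hypothetical function, and a good test suite should "kill" the mutant by failing against it:

```python
def is_adult(age):
    """Original implementation."""
    return age >= 18


def is_adult_mutant(age):
    """Mutant: the >= operator has been changed to >."""
    return age > 18


def run_suite(func):
    """Tiny test suite; returns True if all checks pass."""
    return func(18) is True and func(17) is False and func(30) is True


assert run_suite(is_adult) is True          # suite passes on the original code
assert run_suite(is_adult_mutant) is False  # suite fails on the mutant, i.e. the mutant is killed
```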
Multi-Condition Coverage: A white-box testing technique that ensures all possible combinations of conditions in a decision are tested at least once.
Mock Object: A simulated object that mimics the behavior of real objects in controlled ways, used in unit testing to isolate the behavior of the system under test.
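A short sketch using Python's unittest.mock: a hypothetical payment gateway dependency is replaced with a mock so the order logic can be tested in isolation:

```python
from unittest.mock import Mock


def place_order(gateway, amount):
    """Hypothetical system under test: charges via a payment gateway dependency."""
    return "confirmed" if gateway.charge(amount) else "declined"


# The real gateway is replaced by a mock with controlled behaviour.
gateway = Mock()
gateway.charge.return_value = True

assert place_order(gateway, 25) == "confirmed"
gateway.charge.assert_called_once_with(25)  # verify the interaction with the dependency
```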
Negative Testing: A type of testing that aims to ensure that the software behaves as expected when given invalid or unexpected inputs. It helps to identify how the system handles error conditions.
Non-Functional Requirement (NFR): Requirements that define the quality attributes, performance criteria, and constraints of a software system. Examples include usability, reliability, performance, and security.
Non-Functional Testing: Testing that focuses on the non-functional aspects of a software application, such as performance, usability, reliability, and security. It ensures that the software meets its non-functional requirements.
N+1 Testing: A form of testing that involves testing the application with one additional user, process, or load beyond its expected capacity to identify how it performs under slightly increased stress.
Node Coverage: A white-box testing technique that ensures every node (or statement) in the program's control flow graph is executed at least once during testing.
Normalization: In the context of test data management, normalization refers to the process of organizing data to reduce redundancy and improve data integrity. This can be relevant in database testing.
Non-Intrusive Testing: Testing that does not interfere with the normal operation of the system. It aims to observe and measure the system's behavior without affecting its performance or state.
NUnit: A unit testing framework for .NET applications, similar to JUnit for Java. It is used to create and run automated tests for .NET code.
Operational Acceptance Testing (OAT): Testing performed to verify that the software meets operational requirements, such as reliability, maintainability, and supportability, ensuring it can be used effectively in production.
Orthogonal Array Testing: A systematic, statistical way of testing that uses orthogonal arrays to create a minimal set of test cases that cover all possible pairwise combinations of input parameters. This helps in reducing the number of test cases while ensuring comprehensive coverage.
Output Domain: The range of possible outputs that a system or component can produce. Testing the output domain ensures that all potential outputs are correct and within specified limits.
Oracle: A mechanism used to determine whether the software's output is correct or not. Test oracles can include requirements documents, design specifications, or other reference documents.
Off-The-Shelf Software: Commercially available software that is ready to use and does not require custom development. Testing off-the-shelf software involves validating that it meets the specified requirements and works correctly in the intended environment.
Operational Profile: A statistical representation of how a system will be used in production, including the frequency and distribution of different types of inputs and operations. This helps in designing realistic test scenarios.
Optimization Testing: Testing aimed at improving the performance, efficiency, or resource usage of a software application. This can involve code optimization, load balancing, or other techniques to enhance the system's performance.
Open Source Testing Tools: Testing tools that are available under an open-source license, allowing users to view, modify, and distribute the source code. Examples include Selenium, JUnit, and TestNG.
Pair Testing: A testing approach where two team members work together at the same workstation to test the software. Typically, one person performs the testing while the other observes and reviews, offering feedback and suggestions.
Path Coverage: A white-box testing technique that ensures all possible execution paths in the code are tested at least once. This helps to identify untested code and improve overall test coverage.
Penetration Testing: A type of security testing where testers simulate attacks on a system to identify vulnerabilities that could be exploited by malicious parties. The goal is to find and fix security weaknesses before they can be exploited.
Performance Testing: Testing conducted to evaluate the speed, responsiveness, and stability of a software application under a particular workload. It includes load testing, stress testing, and endurance testing.
Portability Testing: Testing aimed at determining the ease with which software can be transferred from one environment to another, such as from one platform, operating system, or hardware configuration to another.
Positive Testing: Testing conducted with valid input data to verify that the software behaves as expected. This is the opposite of negative testing, which uses invalid inputs.
Priority: The level of importance assigned to a defect or test case, indicating the order in which it should be addressed. High-priority items typically need to be resolved or executed before lower-priority ones.
Probe Effect: The phenomenon where the act of measuring or testing a system can affect its performance. This is particularly relevant in performance testing and debugging.
Process Maturity: The extent to which an organization's processes are defined, managed, measured, and controlled. Higher process maturity levels typically indicate more efficient and effective processes.
Prototype Testing: Testing of early models or prototypes of a software application to gather feedback and identify issues before the final product is developed. This helps in refining requirements and design.
Pseudo-Random Testing: Testing with inputs that are generated in a way that appears random but is actually determined by an algorithm. This can help ensure a broad range of test cases while maintaining reproducibility.
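A short sketch: seeding Python's random generator makes the "random" inputs fully reproducible, so a failing run can be replayed with the same seed:

```python
import random

SEED = 1234  # record the seed with the test results so the run can be reproduced
rng = random.Random(SEED)

# The same seed always yields the same pseudo-random test inputs.
inputs = [rng.randint(-1000, 1000) for _ in range(5)]
rng_again = random.Random(SEED)
assert inputs == [rng_again.randint(-1000, 1000) for _ in range(5)]

for value in inputs:
    # Example property check against a built-in: abs() is never negative.
    assert abs(value) >= 0
```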
Python Unit Testing (PyUnit): A unit testing framework for Python, modeled after JUnit for Java and now shipped as the standard library's unittest module. It is used to write and run repeatable automated tests for Python code.
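A minimal unittest (PyUnit) example; the slugify function is a hypothetical function under test:

```python
import unittest


def slugify(title):
    """Hypothetical function under test: turns a title into a URL slug."""
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase_single_word(self):
        self.assertEqual(slugify("testing"), "testing")


if __name__ == "__main__":
    unittest.main()  # discovers and runs the test methods above
```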
Quality: The degree to which a software product meets specified requirements, customer needs, and expectations, as well as the absence of defects.
Quality Assurance (QA): A set of activities designed to ensure that the development and maintenance processes are adequate to produce a software product with the desired quality. QA focuses on process improvement and adherence to standards.
Quality Control (QC): The process of executing a series of activities to ensure that a software product meets specified quality criteria. QC involves actual testing and reviewing to identify defects in the product.
Quality Gate: A checkpoint in the software development lifecycle where specific criteria must be met before proceeding to the next phase. Quality gates help ensure that the product maintains a certain level of quality at each stage.
Quality Management System (QMS): A formalized system that documents processes, procedures, and responsibilities for achieving quality objectives. A QMS helps coordinate and direct an organization's activities to meet customer and regulatory requirements.
Quality Metric: A standard of measurement used to quantify various aspects of software quality, such as defect density, test coverage, and customer satisfaction. Metrics help in evaluating and improving the quality of the product.
Quality Policy: A document that states an organization's overall intentions and direction regarding quality, as formally expressed by top management. The quality policy provides a framework for setting quality objectives.
Quality Risk: The potential for a software product to fail to meet quality standards or customer expectations. Quality risks are identified, analyzed, and managed to mitigate their impact on the product.
Quality Threshold: The minimum acceptable level of quality for a software product. Products must meet or exceed the quality threshold to be considered acceptable for release.
Quarantine: A process in which a product or component is isolated because it is suspected or known to be defective. The quarantined item is held until it can be tested and either fixed or discarded.
Regression Testing: Testing that is performed after changes are made to the software to ensure that the existing functionality still works correctly and that new defects have not been introduced.
Release Candidate (RC): A version of the software that is potentially ready for release, having passed all necessary tests and quality checks. It is the final stage before the official release.
Reliability: The degree to which a software system performs its intended functions under specified conditions for a specified period of time. Reliability testing focuses on ensuring the software can perform reliably.
Release Note: A document that provides information about the new features, enhancements, and fixes included in a new release of the software. Release notes help users understand what changes have been made.
Requirement: A documented representation of a condition or capability needed by a user to solve a problem or achieve an objective. Requirements define what the software should do and how it should perform.
Requirements Traceability Matrix (RTM): A document that maps and traces user requirements with test cases. It ensures that all requirements are covered by test cases and helps track the status of testing efforts.
Review: A process in which a work product, such as a requirements document or code, is examined by one or more individuals to identify defects and suggest improvements. Types of reviews include peer reviews, inspections, and walkthroughs.
Risk: The possibility of a negative or undesirable outcome, such as a defect or failure. Risk-based testing involves prioritizing testing efforts based on the level of risk associated with different parts of the software.
Risk Assessment: The process of identifying, analyzing, and evaluating risks to determine their potential impact on a project or system. It helps in making informed decisions about risk mitigation strategies.
Robustness: The degree to which a software system can function correctly in the presence of invalid inputs or stressful environmental conditions. Robustness testing ensures that the software can handle unexpected situations gracefully.
Root Cause Analysis (RCA): A method used to identify the underlying causes of defects or failures. RCA helps in understanding why a problem occurred and implementing measures to prevent its recurrence.
Run Book: A detailed set of instructions and procedures for operating a system or performing specific tasks. In testing, a run book may include step-by-step instructions for executing test cases.
Sanity Testing: A type of testing performed after receiving a software build to determine if the new functionality works as expected. It is a subset of regression testing focused on verifying specific areas of functionality.
Scalability Testing: Testing conducted to determine how well a software application can scale up in terms of performance, number of users, or other factors. The goal is to identify the software's ability to grow and handle increased loads.
Scenario Testing: A type of testing where test cases are derived from scenarios that describe end-to-end functionality. Scenarios often represent real-world use cases or user stories.
Security Testing: Testing aimed at ensuring that a software application is protected against threats and vulnerabilities. This includes testing for data protection, authentication, authorization, and other security aspects.
Smoke Testing: A preliminary test to check the basic functionality of a software application. It is often referred to as a "build verification test" and helps ensure that the most critical functions work correctly before further testing is performed.
Software Quality: The degree to which a software product meets specified requirements, customer needs, and expectations. It includes attributes such as functionality, reliability, usability, efficiency, maintainability, and portability.
Software Testing: The process of evaluating a software application to identify defects, ensure it meets specified requirements, and verify that it works as intended. This includes various testing techniques and methodologies.
Spike Testing: A type of performance testing where the system is subjected to extreme changes in load to observe how it responds. The goal is to determine if the system can handle sudden spikes in user activity or data volume.
Static Testing: Testing that involves examining the software's documentation, code, and other artifacts without executing the code. Techniques include reviews, inspections, and walkthroughs.
Stress Testing: A type of performance testing that evaluates how a software system behaves under extreme conditions, such as high load or limited resources. The goal is to identify the breaking point and ensure the system can handle stress gracefully.
System Testing: Testing conducted on a complete, integrated system to verify that it meets its specified requirements. System testing focuses on validating the end-to-end functionality of the application.
System Under Test (SUT): The specific system or component being tested. The term is used to distinguish the target of testing activities from other systems or components.
Systematic Testing: An approach to testing that follows a predefined set of procedures and methodologies to ensure thorough and consistent testing. It involves planning, designing, executing, and evaluating tests systematically.
Selenium: An open-source tool for automating web browsers. It is commonly used for functional and regression testing of web applications.
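A minimal Selenium sketch in Python; it assumes the selenium package is installed and a matching Chrome browser and driver are available locally:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local Chrome + driver installation
try:
    driver.get("https://example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    # A simple functional check on the rendered page.
    assert "Example" in driver.title
    assert heading.text != ""
finally:
    driver.quit()  # always release the browser
```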
Statement Coverage: A white-box testing technique that ensures each statement in the code has been executed at least once during testing. It helps in identifying untested parts of the code.
Test Case: A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition to determine whether the system meets requirements.
Test Coverage: A measure of the amount of testing performed by a set of tests. It typically refers to the percentage of software requirements, features, or lines of code tested.
Test Data: The data used during the execution of tests. Test data can be static (predefined) or dynamic (generated during testing) and should cover both typical and edge case scenarios.
Test Design Specification: A document that specifies the test conditions, test cases, and test data required to implement the testing requirements identified in the test plan.
Test Driven Development (TDD): A software development approach where test cases are developed to specify and validate what the code will do. In TDD, tests are written before the code, and the code is then written and refactored until the tests pass.
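A compressed sketch of the red-green rhythm, using a made-up fizzbuzz example: the test is written first (and fails while the function does not exist), then just enough code is added to make it pass:

```python
import unittest

# Step 2: the simplest implementation that makes the test below pass.
# (In TDD this is written only after the test has been seen to fail.)
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


# Step 1: the test, written before the implementation existed.
class TestFizzBuzz(unittest.TestCase):
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")


if __name__ == "__main__":
    unittest.main()
```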
Test Environment: The configuration of hardware, software, and other resources required to conduct testing. It includes the system under test, test tools, and other support software.
Test Execution: The process of running test cases on the software to validate that the software behaves as expected. Test execution includes logging outcomes and analyzing results.
Test Harness: A collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its output.
Test Level: The hierarchical levels of testing (e.g., unit testing, integration testing, system testing, acceptance testing) applied to a system or component at various stages of development and maintenance.
Test Log: A chronological record of all relevant details about the execution of tests. It typically includes the identity of the tester, test cases executed, defects found, and other important events during testing.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, features to be tested, testing tasks, responsibilities, and risks.
Test Policy: A high-level document that outlines the principles, approach, and major objectives of the organization regarding testing. It guides the testing process and helps align it with the organization’s goals.
Test Procedure: A document specifying a sequence of actions for the execution of a test. It includes steps to set up the environment, execute test cases, and restore the environment to its initial state.
Test Script: A set of instructions executed by a test tool to perform a specific test. Test scripts are typically written for automated testing.
Test Strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects). It is a general approach to testing, aligning with the test policy.
Test Summary Report: A document summarizing the testing activities and results. It includes an assessment of the testing effort and provides recommendations based on the findings.
Test Suite: A collection of test cases intended to be executed together. A test suite can also be referred to as a test set or test batch.
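A sketch of building an explicit suite with unittest; the test classes are hypothetical placeholders:

```python
import unittest


class TestLogin(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)  # stands in for real login checks


class TestCheckout(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)  # stands in for real checkout checks


def smoke_suite():
    """Group selected test cases into one suite that runs together."""
    suite = unittest.TestSuite()
    suite.addTest(TestLogin("test_placeholder"))
    suite.addTest(TestCheckout("test_placeholder"))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(smoke_suite())
```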
Testability: The degree to which a software system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Tester: An individual responsible for identifying defects in the software by executing test cases. Testers evaluate the software to ensure it meets its requirements and is free of defects.
Traceability Matrix: A document that maps and traces user requirements with test cases. It helps ensure that all requirements are covered by test cases and assists in tracking the status of testing efforts.
Trunk: In version control, the trunk is the main development line from which branches are created. It represents the primary line of development in a project.
Unit Testing: A level of software testing where individual units or components of the software are tested. The purpose is to validate that each unit of the software performs as designed.
Usability Testing: Testing conducted to evaluate how easily users can learn and use a product or system. It focuses on the user interface and user experience aspects of the software.
Use Case: A description of a system’s behavior as it responds to a request from one of its stakeholders. Use cases are used to capture functional requirements and design test cases.
Use Case Testing: A black-box testing technique that uses use cases to derive test cases. It ensures that all functional requirements are tested through user interactions with the system.
User Acceptance Testing (UAT): Testing conducted to determine whether a system satisfies the acceptance criteria and to enable the user to determine whether to accept the system. It is typically the final phase of testing before the software goes live.
User Interface (UI): The space where interactions between humans and machines occur. UI testing involves ensuring that the software interface works as expected and provides a good user experience.
User Story: A short, simple description of a feature told from the perspective of the user or customer. User stories are used in agile methodologies to define system functionalities and create test cases.
Utility Testing: Testing to ensure that the software meets the utility requirements, which refer to the functionality that is directly related to the satisfaction of the user’s needs.
UML (Unified Modeling Language): A standardized modeling language used to specify, visualize, construct, and document the artifacts of software systems. UML diagrams are often used in designing and documenting tests.
Update Testing: Testing conducted to ensure that updates to the software do not introduce new defects and that the updated system still meets the specified requirements.
Uninstall Testing: Testing to verify that an application can be completely and cleanly removed from the system without leaving residual data or causing issues.
Usability Requirements: Requirements that specify how easy it should be for users to learn, use, and interact with a system. Usability testing ensures these requirements are met.
Validation: The process of evaluating software during or at the end of the development process to determine whether it meets specified requirements. Validation ensures that "you built the right thing."
Verification: The process of evaluating work products (not the final product) to determine whether they meet the specified requirements. Verification ensures that "you built it right."
Version Control: The management of changes to documents, programs, and other information stored as computer files. Tools such as Git and SVN are commonly used for version control in software development.
Vertical Testing: Testing that involves evaluating specific sections of a system or application in detail. It contrasts with horizontal testing, which covers a broad range of functionalities but in less depth.
V-Model: A software development model that maps the types of tests to each stage of development in a V-shaped diagram. It highlights the relationship between each phase of the development lifecycle and its corresponding testing phase.
Volume Testing: A type of performance testing where the system is subjected to a large volume of data to evaluate its performance and behavior. It helps identify issues related to data handling and storage.
Vulnerability: A weakness or flaw in a software system that can be exploited to cause harm or unauthorized actions. Vulnerability testing aims to identify and fix these weaknesses.
Vulnerability Assessment: A systematic examination of an information system to identify security weaknesses. It includes scanning for vulnerabilities, evaluating their potential impact, and recommending remediation measures.
Virtual User (VU): A simulated user in performance testing tools that mimics the actions of a real user. Virtual users are used to generate load and measure the performance of the system under test.
Visual Testing: A type of testing that involves evaluating the visual aspects of the user interface to ensure they meet design specifications. It includes checking for layout issues, color schemes, font sizes, and other visual elements.
Validation and Verification (V&V): Combined processes that ensure the software meets its requirements and performs its intended functions. V&V includes both static and dynamic testing techniques.
Versioning: The process of assigning unique version numbers to different states of a software product. Versioning helps in tracking and managing changes over time.
Walkthrough: A type of peer review in which the author of a work product leads members of the development team and other stakeholders through the document and the participants ask questions and make comments about possible issues.
Waterfall Model: A sequential software development process in which progress flows downwards through phases such as requirements definition, design, implementation, testing, and maintenance. Each phase must be completed before the next phase begins.
White-Box Testing: A testing technique that examines the program structure and derives test data from the program logic/code. Also known as clear-box testing, open-box testing, or structural testing.
Wideband Delphi: An estimation technique used to predict project effort. It involves multiple experts who provide estimates anonymously, discuss them, and iteratively converge on a consensus estimate.
WBS (Work Breakdown Structure): A hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives. It helps in organizing and defining the total work scope.
Weighted Defect Density: A metric that assigns weights to different defects based on their severity and calculates a density figure to assess the quality of the software.
Window Testing: Testing conducted on applications that use graphical user interfaces to ensure the application behaves as expected within the GUI's windows.
Work Product: Any artifact produced during the software development lifecycle, including documents, code, diagrams, and test cases. Work products are reviewed and tested to ensure quality.
Workflow Testing: Testing aimed at verifying that business processes and workflows in the application function as expected. It often involves end-to-end testing scenarios.
Wrapper: A software component that provides an interface to another component. Wrappers are used in testing to isolate the component under test or to simulate interactions with other components.
X-Driven Testing (XDT): A generic term that can refer to various specific types of driven testing such as data-driven testing (DDT), keyword-driven testing (KDT), and behavior-driven testing (BDT). It emphasizes the use of specific inputs (data, keywords, behaviors) to drive the testing process.
XPath (XML Path Language): A language used for navigating through elements and attributes in an XML document. In testing, XPath expressions are often used in test scripts to locate elements within an XML structure.
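A short example using the limited XPath subset supported by Python's built-in xml.etree.ElementTree (a library such as lxml allows richer XPath 1.0 expressions); the XML document is invented for illustration:

```python
import xml.etree.ElementTree as ET

XML = """
<order id="42">
  <item sku="A1" qty="2"/>
  <item sku="B7" qty="1"/>
</order>
"""

root = ET.fromstring(XML)

# ElementTree supports a subset of XPath for locating elements.
items = root.findall(".//item")
assert len(items) == 2

item_b7 = root.find(".//item[@sku='B7']")
assert item_b7 is not None
assert item_b7.get("qty") == "1"
```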
XSS (Cross-Site Scripting): A security vulnerability typically found in web applications. It allows attackers to inject malicious scripts into webpages viewed by other users. Testing for XSS involves ensuring that user inputs are properly sanitized and encoded.
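A tiny sanitization check using the standard library's html.escape, of the kind a security-oriented unit test might assert; the render_comment helper is hypothetical:

```python
import html


def render_comment(user_input):
    """Hypothetical rendering helper that must escape user-supplied text."""
    return f"<p>{html.escape(user_input)}</p>"


payload = "<script>alert('xss')</script>"
rendered = render_comment(payload)

# The raw script tag must not survive into the rendered HTML.
assert "<script>" not in rendered
assert "&lt;script&gt;" in rendered
```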
XML (eXtensible Markup Language): A markup language used to define rules for encoding documents in a format that is both human-readable and machine-readable. XML is often used in testing for configuration files, data interchange, and test case definitions.
XP (Extreme Programming): An agile software development methodology that emphasizes customer satisfaction, continuous feedback, and iterative development. XP practices include test-driven development (TDD) and pair programming, both of which are highly relevant to testing.
XQuery: A query language that can retrieve data from XML documents. In testing, XQuery is used to validate the contents of XML documents against expected values.
YAML (YAML Ain't Markup Language): A human-readable data serialization standard that can be used in conjunction with all programming languages. In testing, YAML files are often used for configuration, test data, and test case definitions.
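A small sketch assuming the third-party PyYAML package is installed; test configuration (the values here are made up) is loaded with yaml.safe_load:

```python
import yaml  # provided by the third-party PyYAML package

CONFIG = """
base_url: https://staging.example.com
timeout_seconds: 30
browsers:
  - chrome
  - firefox
"""

config = yaml.safe_load(CONFIG)

assert config["timeout_seconds"] == 30
assert "chrome" in config["browsers"]
print(f"Running against {config['base_url']} on {len(config['browsers'])} browsers")
```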
Zero Defect Policy: A quality management approach that aims to achieve and maintain zero defects in software products. It emphasizes defect prevention throughout the development lifecycle.
Zero-Day Attack: A cyber attack that exploits a previously unknown vulnerability in software. Zero-day attacks occur before the software vendor becomes aware of the vulnerability and can release a patch.
Zero-Day Exploit: A piece of malicious code that takes advantage of a zero-day vulnerability in software. Zero-day exploits can cause significant damage because there is no available defense against them until a patch is released.
Zombie Code: In software testing and development, zombie code refers to sections of code that are no longer executed but remain in the codebase. It can increase maintenance efforts and potentially introduce bugs.