Glossary of Software Testing Terms
- acceptance testing
- formal testing conducted to enable a user, customer, or other authorised
entity to determine whether to accept a system or component. Includes
user acceptance testing (UAT), alpha testing, and beta testing
- alpha testing
- simulated or actual operation testing at an in-house site not otherwise
involved with the software developers
- beta testing
- operational testing at a site not otherwise involved with the software
- big-bang testing
integration testing in the small where no incremental testing takes
place prior to all the system's components being combined to form the
system
- black box testing
- tests based on the behaviour of the component or system, derived from
a specification. Also known as functional testing or behavioural testing
- bottom up
- an integration strategy for integration testing in the small where the
lowest level components are tested first, then used to facilitate the testing
of higher level components. The process is repeated until the component
at the top of the hierarchy is included.
- branch
- a conditional transfer of control from any statement to any other statement
in a component
- business process-based testing
- testing based on expected user profiles such as scenarios or use cases.
Used in system testing and acceptance testing.
- CAST
- acronym for computer-aided software testing
- completion criteria
- a criterion for determining when planned testing is complete, defined
in terms of a test measurement technique (e.g. coverage), cost, time or
faults found (number and/or severity). Also known as exit criteria
- component testing
- the testing of individual software components. Also known as unit
testing, module testing, or program testing.
- contract acceptance testing
- a form of acceptance testing against acceptance criteria defined in
a contract
- coverage
- the degree, expressed as a percentage, to which a specified coverage
item (an entity or property used as a basis for testing) has been exercised
by a set of tests
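The percentage is a straightforward ratio of exercised coverage items to total coverage items. A minimal sketch, using hypothetical branch identifiers as the coverage items:

```python
# Coverage = exercised coverage items / total coverage items, as a percentage.
# The branch identifiers are hypothetical placeholders.
total_items = {"B1", "B2", "B3", "B4"}  # all branches in the component
exercised = {"B1", "B3"}                # branches hit by the test suite

coverage = 100 * len(exercised & total_items) / len(total_items)
print(f"{coverage:.0f}% branch coverage")  # -> 50% branch coverage
```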
- cyclomatic complexity
- a measure of the complexity of the code or a control flow graph,
which is equal to the number of decisions plus one.
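The "decisions plus one" count can be illustrated mechanically. A minimal sketch, assuming each `if`, loop, and boolean operator contributes one decision; a simplification of full control-flow-graph analysis:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity as the number of decisions plus one."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For)):
            decisions += 1                      # each branch/loop is a decision
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1   # each and/or adds a decision
    return decisions + 1

code = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 75:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(code))  # two decisions, so complexity 3
```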
- decision
- a program point at which the control flow has two or more alternative
routes
- driver
- a specifically written program produced during integration testing in
the small to call or invoke a baseline
- error
- a human action that produces an incorrect result.
- exhaustive testing
- a test case design technique in which the test case suite comprises
all combinations of input values and preconditions for component variables.
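Even tiny input domains make exhaustive test suites grow multiplicatively, which is why exhaustive testing is rarely practical. A minimal sketch, with a hypothetical component under test:

```python
import itertools

def clamp(value, limit):
    # Component under test (hypothetical): cap value at limit.
    return limit if value > limit else value

values = range(4)   # chosen domain for 'value'
limits = range(4)   # chosen domain for 'limit'

# Exhaustive testing: every combination of input values.
cases = list(itertools.product(values, limits))
print(len(cases))   # 4 x 4 = 16 cases; realistic domains make this infeasible

for value, limit in cases:
    assert clamp(value, limit) <= limit  # property every case must satisfy
```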
- expected outcome
- the behaviour predicted by the specification of an object under specified
conditions
- failure
- deviation of the software from its expected delivery or service
- fault
- a manifestation of an error in software. A fault, if encountered, may
cause a failure.
- functional incrementation
- a strategy for combining components in integration testing in
the small where they are combined to achieve some minimum capability or
to follow a thread of execution of transactions
- functional requirement
- a requirement that specifies a function that a system or system component
must perform. (ANSI/IEEE Std 729-1983, Software Engineering Terminology)
- impact analysis
- assessing the effect of a change to an existing system, usually in
maintenance testing, to determine the amount of regression testing to be
done
- incident
- any significant unplanned event that occurs during testing that requires
subsequent investigation and/or correction. For example when expected
and actual test results are different. Incidents can be raised against
documentation as well as code. Incidents are logged when a person other
than the author of the product performs the testing.
- incremental testing
- integration testing in the small where system components are integrated
into the system one at a time until the entire system is integrated. See
also functional incrementation
- informal review
- a type of review which is undocumented, but useful, cheap and widely
used
- inspection
- a group review quality improvement process for written material. It
consists of two aspects: product (document) improvement and process improvement
(of both document production and inspection). An inspection is led by
a trained leader or moderator (not the author), and includes defined roles,
metrics, rules, checklists, and entry and exit criteria.
- instrumentation
- the insertion of additional code into the program in order to collect
information about program behaviour during program execution. Performed
by coverage measurement tools in a pre-compiler pass.
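Coverage tools typically add the extra code in a pre-compiler pass, but the same idea can be illustrated at run time. A minimal sketch using Python's `sys.settrace` to record which lines of a function actually execute (all names hypothetical):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every executed line in the traced frame.
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(5)           # only the x >= 0 path runs
sys.settrace(None)

print(len(executed))  # the negative branch was never exercised
```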
- integration testing in the large
- testing performed to expose faults in the interfaces and in the interaction
between integrated systems.
- integration testing in the small
- testing performed to expose faults in the interfaces and in the interaction
between integrated components. Strategies include top-down, bottom-up
and functional incrementation.
- isolation testing
- component testing of individual components in isolation from surrounding
components, with surrounding components being simulated by stubs.
- LCSAJ
- a Linear Code Sequence And Jump, consisting of the following three items
(conventionally identified by line numbers in a source code listing): the
start of the linear sequence of executable statements, the end of the linear
sequence, and the target line to which the control flow is transferred at
the end of the linear sequence.
- maintenance testing
- testing changes (fixes or enhancements) to existing systems. May
include analysis of the impact of the change to decide what regression testing
should be done.
- model office
- an environment for system or user acceptance testing which is as close
to field use as possible.
- negative testing
- testing aimed at showing software does not work. Also known as dirty
testing
- non-functional system testing
- testing of system requirements that do not relate to functionality,
e.g. performance, usability, security. Such requirements are also known
as quality attributes
- oracle assumption
- the assumption that a tester can routinely identify the correct outcome
of a test
- oracle
- a mechanism to produce the expected outcomes to compare with the
actual outcomes of the Software Under Test (SUT)
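An oracle can be as simple as a trusted reference implementation whose output is compared with the SUT's actual output. A minimal sketch (the sort routine and names are hypothetical):

```python
def oracle_sort(items):
    # Trusted reference implementation acting as the oracle.
    return sorted(items)

def sut_sort(items):
    # Software under test: a hand-written bubble sort.
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# Compare the SUT's actual outcomes with the oracle's expected outcomes.
for case in ([3, 1, 2], [], [5, 5, 1]):
    assert sut_sort(case) == oracle_sort(case)
print("all outcomes matched the oracle")
```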
- path
- a sequence of executable statements of a component, from an entry point
to an exit point
- peer review
- a type of review which is documented, has defined fault detection
processes, and includes peers and technical experts but no managers.
Also known as a technical review
- precondition
- environmental and state conditions which must be fulfilled before the
component can be executed with a particular input value.
- regression testing
- retesting of a previously tested program following modification to ensure
that faults have not been introduced or uncovered as a result of the changes
made, and that the modified system still meets its requirements. It is
performed whenever the software or its environment is changed.
- reliability
- the probability that software will not cause the failure of a system
for a specified time under specified conditions.
- retesting
- running a test more than once
- review
- a process or meeting during which a work product, or set of work products,
is presented to project personnel, managers, users or other interested
parties for comment or approval. Types of review include walkthrough,
inspection, informal review and technical or peer review.
- static analysis
- analysis of a program carried out without executing the program.
Static analysis can provide information about the quality of the software
by giving objective measurements of characteristics of the software such
as cyclomatic complexity and nesting levels
- static testing
- testing of an object without execution on a computer. Includes static
analysis (done by a software program) and all forms of review.
- stress testing
- testing conducted to evaluate a system or component at or beyond the
limits of its specified requirements
- stub
- a skeletal or special purpose implementation of a software module, used
to develop or test a component that calls or is otherwise dependent on
it. Used in integration testing in the small.
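A stub can be a hand-written class returning canned answers, so the calling component can be tested before (or without) the real dependency. A minimal sketch, with hypothetical names throughout:

```python
class ExchangeRateServiceStub:
    """Skeletal stand-in for a real rate service: no network, canned answer."""
    def get_rate(self, currency):
        return 2.0

def convert(amount, currency, rate_service):
    # Component under test: depends only on the rate service's interface.
    return amount * rate_service.get_rate(currency)

print(convert(10, "EUR", ExchangeRateServiceStub()))  # -> 20.0
```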
- system testing
- the process of testing an integrated system to verify that it meets
specified requirements. Covers both functional and non-functional system
testing
- technical review
- a type of review which is documented, has defined fault-detection processes,
and includes peers and technical experts but no managers. Also known as
a peer review
- test case design technique
- a method used to derive or select test cases.
- test case
- a set of inputs, execution preconditions, and expected outcomes developed
for a particular objective, such as to exercise a particular program
path or to verify compliance with a specific requirement.
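The three ingredients of a test case can be captured in a small data structure. A minimal sketch, with a hypothetical component under test:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Inputs, execution preconditions, and expected outcome for one objective."""
    inputs: dict
    preconditions: list = field(default_factory=list)
    expected: object = None

def divide(a, b):
    # Component under test (hypothetical).
    return a / b

case = TestCase(
    inputs={"a": 10, "b": 4},
    preconditions=["b is non-zero"],
    expected=2.5,
)
actual = divide(**case.inputs)
assert actual == case.expected
print("pass")
```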
- test condition
- anything that could be tested.
- test control
- actions taken by a test manager, such as reallocating test resources.
This may involve changing the test schedule, test environments, number
of testers etc.
- test environment
- a description of the hardware and software environment in which the
tests will be run, and any other software with which the software under
test interacts, including stubs and drivers.
- test plan
- a record of the test planning process detailing the degree of tester
independence, the test environment, the test case design techniques and
test measurement techniques to be used, and the rationale for their choice.
- test procedure
- a document for providing detailed instructions for the execution of
one or more test cases.
- test records
- for each test, an unambiguous record of the identities and versions
of the component or system under test, the test specification, and the
actual outcome
- test script
- commonly used to refer to the automated test procedure used with a test
harness
- testing
- the process of exercising software to verify that it satisfies specified
requirements and to detect faults, and the measurement of software quality.
- top down
- an integration strategy for integration testing in the small, where
the component at the top of the component hierarchy is tested first, with
lower level components being simulated by stubs. Tested components are
then used to test lower level components. The process is repeated until
the lowest level components have been included.
- user acceptance testing
- also known as UAT. Part of acceptance testing. Customers or end
users perform or are closely involved with the tests, which may be based on
business processes or may use a model office.
- validation
- determination of the correctness of the products of software development
with respect to the user needs and requirements.
- verification
- the process of evaluating a system or component to determine whether
the products of the given development phase satisfy the conditions imposed
at the start of that phase.
- volume testing
- testing where the system is subjected to large volumes of data.
- walkthrough
- a type of review of documents such as requirements, designs, tests or
code characterised by the author of the document guiding the progression
of the walkthrough. Participants are generally peers. Scenarios or dry
runs may be used.
- white box testing
- test case selection that is based on an analysis of the internal structure
of the component. Also known as structural testing or glass box testing.