Structure
Everything that comprises the physical product.
Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content, packaging, license agreements, etc.
Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files.
Hardware: any hardware component that is integral to the product.
Interfaces: points of connection and communication between sub-systems.
Code: the code structures that comprise the product, from executables to individual routines.
Function
Everything that the product does.
Testability: any functions provided to help test the product, such as diagnostics, log files, asserts, test menus, etc.
Interactions: any interactions or interfaces between functions within the product.
Error Handling: any functions that detect and recover from errors, including all error messages.
Multimedia: sounds, bitmaps, videos, or any graphical display embedded in the product.
Startup/Shutdown: each method and interface for invoking and initializing the product, as well as for exiting it.
Time-related: any functions that are sensitive to time, such as daily or month-end reports, nightly batch jobs, or terms and warranty periods.
Calculation: any arithmetic function or arithmetic operations embedded in other functions.
Application: any function that defines or distinguishes the product or fulfills core requirements.
Transformations: functions that modify or transform something (e.g. setting fonts, inserting clip art, withdrawing money from an account).
System Interface: any functions that exchange data with something other than the user, such as with other programs, hard disk, network, printer, etc.
User Interface: any functions that mediate the exchange of data with the user (e.g. navigation, display, data entry).
Data
Everything that the product processes.
Input: any data that is processed by the product.
Output: any data that results from processing by the product.
Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc.
Persistent: any data that is stored internally and expected to persist over multiple operations, such as the modes or states of the product, or the contents of documents.
Sequences: any ordering or permutation of data, e.g. word order, sorted vs. unsorted data, order of tests.
Big and little: variations in the size and aggregation of data.
Noise: any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.
Lifecycle: transformations over the lifetime of a data entity as it is created, accessed, modified, and deleted.
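To make the Lifecycle item concrete, a data-lifecycle test can walk a single entity through create, access, modify, and delete, checking state at each step. A minimal sketch in Python; RecordStore is a hypothetical stand-in for whatever component owns the data:

```python
# Lifecycle sketch: walk one data entity through
# create -> access -> modify -> delete.
# RecordStore is a hypothetical stand-in for the product under test.

class RecordStore:
    def __init__(self):
        self._records = {}

    def create(self, key, value):
        self._records[key] = value

    def read(self, key):
        return self._records[key]

    def update(self, key, value):
        if key not in self._records:
            raise KeyError(key)
        self._records[key] = value

    def delete(self, key):
        del self._records[key]

def test_record_lifecycle():
    store = RecordStore()
    store.create("invoice-1", {"total": 100})
    assert store.read("invoice-1") == {"total": 100}   # created and accessible
    store.update("invoice-1", {"total": 150})
    assert store.read("invoice-1") == {"total": 150}   # modification persisted
    store.delete("invoice-1")
    try:
        store.read("invoice-1")                        # deleted data must be gone
        assert False, "read after delete should fail"
    except KeyError:
        pass

if __name__ == "__main__":
    test_record_lifecycle()
    print("lifecycle test passed")
```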
Platform
Everything on which the product depends (and that is outside your project).
Internal Components: libraries and other components that are embedded in your product but are produced outside your project. Since you don’t control them, you must determine what to do in case they fail.
External Software: software components and configurations that are not a part of the shipping product, but are required (or optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
External Hardware: hardware components and configurations that are not part of the shipping product, but are required (or optional) in order for the product to work: CPUs, memory, keyboards, peripheral boards, etc.
Operations
How the product will be used.
Extreme Use: challenging patterns and sequences of input that are consistent with the intended use of the product.
Disfavored Use: patterns of input produced by ignorant, mistaken, careless or malicious use.
Common Use: patterns and sequences of input that the product will typically encounter. This varies by user.
Environment: the physical environment in which the product operates, including such elements as noise, light, and distractions.
Users: the attributes of the various kinds of users.
Time
Any relationship between the product and time.
Concurrency: more than one thing happening at once (multi-user, time-sharing, threads and semaphores, shared data); see the sketch after this list.
Changing Rates: speeding up and slowing down (spikes, bursts, hangs, bottlenecks, interruptions).
Fast/Slow: testing with “fast” or “slow” input; fastest and slowest; combinations of fast and slow.
Input/Output: when input is provided, when output created, and any timing relationships (delays, intervals, etc.) among them.
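As a concrete illustration of the Concurrency item, here is a minimal sketch of a test that updates shared data from several threads at once. Counter is a hypothetical component; if its lock is removed, increments can be lost, which is exactly the kind of failure such a test exists to expose:

```python
# Concurrency sketch: many threads update shared data at once.
# Counter is a hypothetical component; without the lock, increments
# can be lost, and this test is designed to catch that.
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # remove the lock to see the race
            self.value += 1

def test_concurrent_increments(threads=8, per_thread=10_000):
    counter = Counter()

    def worker():
        for _ in range(per_thread):
            counter.increment()

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    assert counter.value == threads * per_thread, counter.value

if __name__ == "__main__":
    test_concurrent_increments()
    print("no updates lost")
```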
Capability
Can it perform the required functions?
Reliability
Will it work well and resist failure in all required situations?
Data Integrity: the data in the system is protected from loss or corruption.
Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
Safety: the product will not fail in such a way as to harm life or property.
Usability
How easy is it for a real user to use the product?
Learnability: the operation of the product can be rapidly mastered by the intended user.
Operability: the product can be operated with minimum effort and fuss.
Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.
Security
How well is the product protected against unauthorized use or intrusion?
Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities).
Authorization: the rights that are granted to authenticated users at varying privilege levels.
Authentication: the ways in which the system verifies that a user is who she says she is.
Privacy: the ways in which customer or employee data is protected from unauthorized people.
Scalability
How well does the deployment of the product scale up or down?
Performance
How speedy and responsive is it?
Installability
How easily can it be installed onto its target platform(s)?
Upgrades: Can new modules or versions be added easily? Do they respect the existing configuration?
Uninstallation: When the product is uninstalled, is it removed cleanly?
Configuration: What parts of the system are affected by installation? Where are files and resources stored?
System requirements: Does the product recognize if some necessary component is missing or insufficient?
Compatibility
How well does it work with external components & configurations?
Resource Usage: the product doesn’t unnecessarily hog memory, storage, or other system resources.
Backward Compatibility: the product works with earlier versions of itself.
Hardware Compatibility: the product works with particular hardware components and configurations.
Operating System Compatibility: the product works with a particular operating system.
Application Compatibility: the product works in conjunction with other software products.
Supportability
How economical will it be to provide support to users of the product?
Testability
How effectively can the product be tested?
Maintainability
How economical is it to build, fix or enhance the product?
Portability
How economical will it be to port or reuse the technology elsewhere?
Localizability
How economical will it be to adapt the product for other places?
Regulations: Are there different regulatory or reporting requirements over state or national borders?
Language: Can the product adapt easily to longer messages, right-to-left text, or ideographic scripts?
Money: Must the product be able to support multiple currencies? Currency exchange?
Social or cultural differences: Might the customer find cultural references confusing or insulting?
Function Testing
Test what it can do.
Identify things that the product can do (functions and sub-functions).
Determine how you’d know if a function was capable of working.
Test each function, one at a time.
Check that each function does what it's supposed to do and not what it isn't supposed to do (a sketch follows below).
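A minimal sketch of these steps, assuming a hypothetical slugify function as the capability under test; one test checks what it should do, another checks what it should not:

```python
# Function testing sketch: exercise one function at a time and
# check both expected behavior and absence of unwanted behavior.
# slugify is a hypothetical function standing in for the product.

def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug."""
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

def test_slugify_does_what_it_should():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaced   Out  ") == "spaced-out"

def test_slugify_does_not_do_what_it_should_not():
    slug = slugify("Crème brûlée!?")
    assert " " not in slug           # no stray whitespace
    assert not slug.startswith("-")  # no leading separator
    assert not slug.endswith("-")    # no trailing separator

if __name__ == "__main__":
    test_slugify_does_what_it_should()
    test_slugify_does_not_do_what_it_should_not()
    print("function tests passed")
```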
Domain Testing
Divide and conquer the data.
Look for any data processed by the product. Look at outputs as well as inputs.
Decide which particular data to test with.
Consider combinations of data worth testing together (see the sketch below).
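A common way to decide which particular data to test with is to partition the input into equivalence classes and probe the boundaries between them. A sketch using pytest; the discount rule is a hypothetical example (orders of 100 or more get 10% off):

```python
# Domain testing sketch: partition the input space and probe the
# boundaries. The discount rule here is a hypothetical example:
# orders of 100.00 or more get 10% off.
import pytest

def discounted_total(total):
    if total < 0:
        raise ValueError("total cannot be negative")
    return total * 0.9 if total >= 100 else total

@pytest.mark.parametrize("total,expected", [
    (0, 0),             # lower boundary of the no-discount class
    (99.99, 99.99),     # just below the discount threshold
    (100, 90.0),        # exactly on the threshold
    (100.01, pytest.approx(90.009)),  # just above the threshold
    (10_000, 9_000.0),  # representative of the discount class
])
def test_discount_partitions(total, expected):
    assert discounted_total(total) == expected

def test_invalid_partition_rejected():
    with pytest.raises(ValueError):
        discounted_total(-1)  # invalid class: negative totals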
Stress Testing
Overwhelm the product.
Look for sub-systems and functions that are vulnerable to being overloaded or “broken” in the presence of challenging data or constrained resources.
Identify data and resources related to those sub-systems and functions.
Select or generate challenging data, or resource-constraint conditions to test with (a sketch follows the examples below), such as:
large or complex data structures
long test runs
many test cases
low memory conditions
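For instance, here is a sketch of a stress test that feeds a component pathologically large and deeply nested data. Python's json module stands in for the subsystem under stress; the acceptable outcomes are success or a clean failure, never a crash or a hang:

```python
# Stress testing sketch: feed a component pathologically large and
# deeply nested data and check that it either copes or fails cleanly.
# json is used here as a stand-in for the subsystem under stress.
import json

def test_large_flat_input():
    # ~1 million elements: should round-trip without error.
    big = list(range(1_000_000))
    assert json.loads(json.dumps(big)) == big

def test_deeply_nested_input_fails_cleanly():
    # 100,000 nested lists: acceptable outcomes are success or a
    # clean Python exception -- never a crash or a hang.
    nested = "[" * 100_000 + "]" * 100_000
    try:
        json.loads(nested)
    except (RecursionError, json.JSONDecodeError):
        pass

if __name__ == "__main__":
    test_large_flat_input()
    test_deeply_nested_input_fails_cleanly()
    print("stress tests completed")
```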
Flow Testing
Do one thing after another.
Define test procedures or high level cases that incorporate multiple activities connected end-to-end.
Don’t reset the system between tests.
Vary timing and sequencing, and try parallel threads (see the sketch below).
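A sketch of one such flow, assuming a hypothetical Account component: a single long session in which each step operates on whatever state the previous steps left behind, with no reset in between:

```python
# Flow testing sketch: one long session, end to end, with no reset
# between steps. Account is a hypothetical component; each step
# builds on the state the previous steps left behind.

class Account:
    def __init__(self):
        self.balance = 0
        self.history = []

    def deposit(self, amount):
        self.balance += amount
        self.history.append(("deposit", amount))

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.history.append(("withdraw", amount))

def test_long_flow():
    account = Account()           # created once, never reset
    account.deposit(100)
    account.withdraw(30)
    account.deposit(5)
    try:
        account.withdraw(1_000)   # mid-flow error must not corrupt state
    except ValueError:
        pass
    account.withdraw(75)
    assert account.balance == 0
    assert len(account.history) == 4  # the failed withdrawal left no record

if __name__ == "__main__":
    test_long_flow()
    print("flow test passed")
```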
Scenario Testing
Test to a compelling story.
Think about everything going on around the product.
Design tests that involve meaningful and complex interactions with the product.
A good scenario test is a compelling story of how someone who matters might do something that matters with the product.
Claims Testing
Verify every claim.
Identify reference materials that include claims about the product (implicit or explicit).
Analyze individual claims, and clarify vague claims.
Verify that each claim about the product is true.
If you’re testing from an explicit specification, expect it and the product to be brought into alignment.
User Testing
Involve the users.
Identify categories and roles of users.
Determine what each category of user will do (use cases), how they will do it, and what they value.
Get real user data, or bring real users in to test.
Otherwise, systematically simulate a user (be careful—it’s easy to think you’re like a user even when you’re not).
Powerful user testing involves a variety of users and user roles, not just one.
Risk Testing
Imagine a problem, then look for it.
What kinds of problems could the product have?
Which kinds matter most? Focus on those.
How would you detect them if they were there?
Make a list of interesting problems and design tests specifically to reveal them.
It may help to consult experts, design documentation, past bug reports, or apply risk heuristics.
Automatic Checking
Run a million different tests.
Look for opportunities to automatically generate a lot of tests.
Develop an automated, high speed evaluation mechanism.
Write a program to generate, execute, and evaluate the tests (see the sketch below).
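One way to approach a million tests with only the standard library: generate random inputs from a seeded source and evaluate every result with a cheap oracle (here, round-trip equality) instead of hand-written expected values. encode and decode are hypothetical functions under test:

```python
# Automatic checking sketch: generate many random inputs and use a
# cheap oracle (round-trip equality) to evaluate every result at
# high speed. encode/decode are hypothetical functions under test.
import random
import string

def encode(text):
    return text.encode("utf-8").hex()

def decode(blob):
    return bytes.fromhex(blob).decode("utf-8")

def random_text(rng, max_len=50):
    alphabet = string.printable
    return "".join(rng.choice(alphabet) for _ in range(rng.randrange(max_len)))

def run_generated_tests(count=1_000_000, seed=42):
    rng = random.Random(seed)   # seeded, so failures are reproducible
    for i in range(count):
        text = random_text(rng)
        assert decode(encode(text)) == text, f"round-trip failed on case {i}: {text!r}"

if __name__ == "__main__":
    run_generated_tests(count=100_000)  # scale the count to your time budget
    print("all generated checks passed")
```

Seeding the generator keeps every failure reproducible; a property-based tool such as Hypothesis automates the same idea and adds automatic shrinking of failing cases.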
Customers
Anyone who is a client of the test project.
Do you know who your customers are? Whose opinions matter? Who benefits or suffers from the work you do?
Do you have contact and communication with your customers? Maybe they can help you test.
Maybe your customers have strong ideas about what tests you should create and run.
Maybe they have conflicting expectations. You may have to help identify and resolve those.
Information
Information about the product or project that is needed for testing.
Are there any engineering documents available? User manuals? Web-based materials?
Does this product have a history? Old problems that were fixed or deferred? Pattern of customer complaints?
Do you need to familiarize yourself with the product more, before you will know how to test it?
Is your information current? How are you apprised of new or changing information?
Is there any complex or challenging part of the product about which there seems strangely little information?
Developer Relations
How you get along with the programmers.
Hubris: Does the development team seem overconfident about any aspect of the product?
Defensiveness: Is there any part of the product the developers seem strangely opposed to having tested?
Rapport: Have you developed a friendly working relationship with the programmers?
Feedback loop: Can you communicate quickly, on demand, with the programmers?
Feedback: What do the developers think of your test strategy?
Test Team
Anyone who will perform or support testing.
Do you know who will be testing?
Are there people not on the “test team” that might be able to help?
People who’ve tested similar products before and might have advice?
Do you have enough people with the right skills to fulfill a reasonable test strategy?
Are there particular test techniques that the team has special skill or motivation to perform?
Is any training needed? Is any available?
Equipment & Tools
Hardware, software, or documents required to administer testing.
Hardware: Do you have all the equipment you need to execute the tests? Is it set up and ready to go?
Automation: Are any test automation tools needed? Are they available?
Probes: Are any tools needed to aid in the observation of the product under test?
Matrices & Checklists: Are any documents needed to track or record the progress of testing?
Schedule
The sequence, duration, and synchronization of project events.
Test Design: How much time do you have? Are there tests better to create later than sooner?
Test Execution: When will tests be executed? Are some tests executed repeatedly, say, for regression purposes?
Development: When will builds be available for testing, features added, code frozen, etc.?
Documentation: When will the user documentation be available for review?
Test Items
The product to be tested.
Scope: What parts of the product are and are not within the scope of your testing responsibility?
Availability: Do you have the product to test?
Volatility: Is the product constantly changing? What will be the need for retesting?
New Stuff: What has recently been changed or added in the product?
Testability: Is the product functional and reliable enough that you can effectively test it?
Future Releases: What part of your tests, if any, must be designed to apply to future releases of the product?
Deliverables
The observable products of the test project.
Media: How will you record and communicate your reports?
Standards: Is there a particular test documentation standard you’re supposed to follow?
Purpose: Are your deliverables provided as part of the product? Does anyone else have to run your tests?
Content: What sort of reports will you have to make? Will you share your working notes, or just the end results?