  1. Requirements Factors
    1. Ambiguity
      1. Words and diagrams are always interpreted by people, and different people will often have different interpretations of things. More ambiguity means more likelihood that a bug can be introduced through honest misunderstanding.
    2. Very High Precision
      1. Sometimes a document will specify a higher level of precision than is necessary or achievable. Sometimes the product should behave in a way that is more precise than the specification suggests. In any case, the higher the precision required, the more likely it is that the product will not meet that requirement. (See the tolerance sketch at the end of this outline.)
    3. Mysterious Silence
      1. Sometimes a specification will leave out things that a tester might think are essential or important. This "mysterious" silence might indicate that the designers are not thinking enough about those aspects of the design, and therefore there are perhaps more bugs in it. This is commonly seen with error handling.
    4. Undecided Requirements
      1. The designers might have intentionally left parts of the product unspecified because they don’t yet know how it should work. Postponing the design of a system is a normal part of Agile development, for instance, but wherever that happens there is a possibility that a big problem will be hiding in those unknown details.
    5. Evolving Requirements
      1. Requirements are not static; they are changed, developed, and extended. Any document is a representation of what some person believed at some time in the past, and when a requirement is updated, it's possible that other requirements which SHOULD have changed, didn't. Fast-evolving requirements often develop inconsistencies and contradictions that lead to bugs.
    6. Imported Requirements
      1. Sometimes requirement statements are "borrowed": cut and pasted from other documents or even from other projects. These may include elements not appropriate to the current project.
    7. Hard to Read
      1. If the document is large, poorly formatted, repetitive, or otherwise hard to read, it is less likely to have been carefully written or properly reviewed.
    8. Non-Native Writers
      1. When the person writing the specification is not fluent in the specification's language, misunderstanding and error are likely.
    9. Non-Native Readers
      1. When the people reading and interpreting the specification are not fluent in the specification's language, misinterpretation is likely.
    10. Critical Feature
      1. The more important a feature is, the more important its bugs will be.
    11. Strategic Feature
      1. A feature might be key to differentiating your product from a competitor's, or it might have a special notoriety that would make its bugs especially important.
    12. VIP Opinion
      1. A particularly important person might be paying attention to a particular feature or configuration or type of use, making bugs in that area more important. Or the important person's fascination with one aspect of the product may divert needed attention from other parts of the product.
  2. Operational Factors
    1. Popular Feature
      1. The more people use a feature, the more likely any bugs in it will be found by users.
    2. Disconnection
      1. Different parts of a product that must work together may fall into incompatible states, leading to a failure of the system as a whole.
    3. Unreliable Platform
      1. Deployed products may exhibit problems due to variations or failures in the underlying supporting technology.
    4. Security Threats
      1. Malicious actors will attempt to break in, so any exploitable weakness is likely to be found and exploited.
    5. Misusable
      1. A feature might be easy to misuse, such that it misbehaves in a way that, while not technically a flaw in the design, is still effectively a bug.
    6. Glaring Failure
      1. A problem or its consequences may be obvious to anyone who encounters it.
    7. Insidious Failure
      1. The causes or symptoms of a problem may remain invisible or hard to see for some time before anyone notices, allowing more trouble to build.
  3. Project Factors
    1. Learning Curve
      1. When developers are new to a tool, technology, or solution domain, they are likely to make mistakes, many of which they will be unable to detect themselves.
    2. Poor Control
      1. Code and other artifacts may not be under sufficient scrutiny or change control, allowing mistakes to be made and to persist. Also, people may try to subvert weak controls when they perceive themselves to be under time pressure.
    3. Rushed Work
      1. The amount of work exceeds the time available to do it comfortably. Corners are likely to be cut; details are likely to be forgotten.
    4. Fatigue
      1. Programmers and other members of the development team are more likely to make mistakes when they're physically tired or even just bored.
    5. Overfamiliarity
      1. When people are immersed in a project or a community for an extended time, they may become blind to patterns of risks or problems that are easy for an outsider to see.
    6. Distributed Team
      1. When people are working remotely from each other, communication may become strained and difficult, simple collaborations become expensive, and the conditions for exchanging tacit knowledge are inhibited.
    7. Third-party Contributions
      1. Any part of a product contributed by a third-party vendor may contain hidden features and bugs, and the developers may otherwise not fully understand it.
    8. Bad Tools
      1. The project team may be saddled with tools that interfere with or constrain their work; or that may introduce bugs directly into their work.
    9. Expense of Fixes
      1. Some components or types of bugs may be especially expensive or slow to fix (platform bugs are typically like this). In that case, you may need to focus on finding those bugs especially early.
    10. Not Yet Tested
      1. Any part of the product that hasn't yet been tested is more likely to contain undiscovered bugs than parts that have already been tested. Therefore, for instance, it may be better to focus on parts of the product that have not been unit tested.
  4. Technology Factors
    1. New Technology
      1. Over time, the risks associated with any new kind of technology will become apparent, so if your product uses the latest whizzy concept, it is more likely to have important and unknown bugs in it.
    2. New Code
      1. The newer the code you are testing, the more likely it is to have unknown problems.
    3. Old Code
      1. A product that has been around for a while may contain code that is unsuited to its current context, difficult to understand, or hard to modify.
    4. Changed Code
      1. Any recently changed code is more likely to have unknown problems.
    5. Brittle Code
      1. Some code may be written in a way that makes it difficult to change without introducing new problems. Even if this code never changes, it may be brittle in the sense that it tends to break when anything around it changes.
    6. Complexity
      1. The more interacting elements a product has, the more ways it can fail; the more states or state transitions it has, the more of them can be wrong. (A sketch at the end of this outline shows how quickly these combinations grow.)
    7. Failure History
      1. The more that a product or part of a product has failed in the past, the more you might expect it to fail in the future. Also, if a product has already failed in a particularly embarrassing way, it cannot be allowed to fail that way again without bringing the project team into disrepute.
    8. Dependencies Upstream
      1. One part of a system or one feature of a product may depend on data or conditions that are controlled by other components that come before it. The more upstream processing that must occur correctly, the more likely it is that a bug in one of those upstream processes will cause a failure in the downstream component. (See the pipeline sketch at the end of this outline.)
    9. Dependencies Downstream
      1. A component that many other components rely on involves more risk, because its bugs will propagate trouble downstream.
    10. Distributed Components
      1. A product may be composed of components spread out over a large area, connected by tenuous network links that introduce uncertainty, noise, or lag time into the system.
    11. Open-Ended Input
      1. The greater the freedom there is in the input data, the more likely it is that some particular configuration of data will trigger a bug. Lack of filtering and bounding is an especially serious problem for security. (See the input-sanitizing sketch at the end of this outline.)
    12. Hard to Test
      1. When something is hard to test, perhaps because it is hard to observe or hard to control, there will be greater risk that bugs will go undetected, and it will require extra effort to find the important bugs. (The last sketch at the end of this outline shows one way a controllability seam mitigates this.)
    13. Hardware
      1. Hardware components can’t be changed easily. Hardware-related problems must be found early because of the long lead time for fixing them.
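
A few of the factors above lend themselves to concrete illustration; the sketches below use Python. First, Very High Precision: a minimal sketch, assuming a hypothetical total_price function and an assumed spec tolerance of one cent, of how a test can assert against the stated precision rather than exact equality.

```python
import math

def total_price(unit_price: float, quantity: int, tax_rate: float) -> float:
    """Hypothetical function under test: computes a taxed total."""
    return unit_price * quantity * (1.0 + tax_rate)

def test_total_price_meets_stated_precision() -> None:
    # Assumed requirement: totals must be accurate to within $0.01.
    # Asserting against that stated tolerance, rather than exact equality,
    # probes whether the product can actually meet the required precision.
    actual = total_price(unit_price=9.99, quantity=10, tax_rate=0.073)
    expected = 107.1927  # 9.99 * 10 * 1.073, worked out by hand
    assert math.isclose(actual, expected, abs_tol=0.01)

if __name__ == "__main__":
    test_total_price_meets_stated_precision()
    print("precision check passed")
```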
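
Complexity: a small worked sketch of how interactions and configurations multiply. The component and option counts are made-up numbers for illustration only.

```python
from math import comb

# With n interacting elements there are n*(n-1)/2 pairwise interactions,
# each of which is a place where behavior can go wrong.
for n in (5, 10, 20, 40):
    print(f"{n:>3} components -> {comb(n, 2):>4} pairwise interactions")

# Independent configuration options multiply: even a handful of options
# (hypothetical counts below) yields more combinations than anyone will
# exercise by hand.
option_counts = [3, 4, 2, 5, 2]  # e.g. OS versions, locales, roles, ...
total_configurations = 1
for count in option_counts:
    total_configurations *= count
print(f"{len(option_counts)} options -> {total_configurations} distinct configurations")
```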
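
Dependencies Upstream: a sketch of how a defect in an upstream step surfaces as a failure downstream. Both functions are invented for illustration, and the upstream bug is deliberate.

```python
def parse_order(raw: str) -> dict:
    """Upstream step (hypothetical): parses 'sku,quantity' into a record."""
    sku, quantity = raw.split(",")
    # Deliberate upstream bug: quantity is left as a string, not an int.
    return {"sku": sku.strip(), "quantity": quantity.strip()}

def total_units(orders: list) -> int:
    """Downstream step: relies on upstream having produced integer quantities."""
    return sum(order["quantity"] for order in orders)

orders = [parse_order("A-100, 3"), parse_order("B-200, 5")]
try:
    print(total_units(orders))
except TypeError as error:
    # The failure surfaces here, downstream, even though the defect is upstream.
    print(f"downstream failure caused by upstream data: {error}")
```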
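
Open-Ended Input: a minimal sketch of filtering and bounding a free-form string before it reaches the rest of the system. The field name, length limit, and character whitelist are assumptions chosen for illustration.

```python
MAX_NAME_LENGTH = 64  # assumed bound for illustration
ALLOWED_CHARACTERS = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .-'"
)

def sanitize_display_name(raw: str) -> str:
    """Filter and bound an open-ended string; unbounded, unfiltered input is a
    classic source of bugs and security problems."""
    if not raw or len(raw) > MAX_NAME_LENGTH:
        raise ValueError("display name missing or too long")
    if any(ch not in ALLOWED_CHARACTERS for ch in raw):
        raise ValueError("display name contains disallowed characters")
    return raw.strip()

# Exercising the bounds: the oversized and script-bearing inputs are rejected.
for candidate in ["Ada Lovelace", "x" * 1000, "<script>alert(1)</script>"]:
    try:
        print("accepted:", sanitize_display_name(candidate))
    except ValueError as error:
        print("rejected:", error)
```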
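
Hard to Test: a sketch of one way to reduce the risk, adding a controllability seam so that behavior depending on the real clock can be driven deterministically from a test. The is_offer_expired function and its parameters are hypothetical.

```python
from datetime import datetime, timezone
from typing import Callable

def is_offer_expired(
    expiry: datetime,
    now: Callable[[], datetime] = lambda: datetime.now(timezone.utc),
) -> bool:
    """The injected `now` callable is a controllability seam: without it, the
    result depends on the real clock and cannot be tested deterministically."""
    return now() >= expiry

# In a test we control "now" directly instead of waiting for real time to pass.
def fixed_now() -> datetime:
    return datetime(2030, 1, 1, tzinfo=timezone.utc)

deadline = datetime(2029, 12, 31, tzinfo=timezone.utc)
assert is_offer_expired(deadline, now=fixed_now) is True
print("controllability seam exercised")
```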