Defect Leakage and Escape Rate
Defect leakage and escape rate are closely related metrics that quantify customer-visible quality failures. Together they answer: how many quality problems are reaching our users? These are the metrics that executive leadership cares most about — they directly correlate with customer satisfaction, support costs, and revenue impact.
Defect Leakage vs Escape Rate
- Defect Leakage Rate: % of defects that were present in the system BEFORE testing but not found during testing. Formula: Leakage Rate = (Defects found after release / Total defects in release including post-release) × 100. Focus: how effective was testing coverage?
- Defect Escape Rate: Broader term — % of defects that escape ANY quality gate (could be internal gates: from unit testing to system testing, or external: from testing to production). Formula: Escape Rate = (Defects found in current phase / (Defects found in current phase + all previous phases)) × 100
- The difference in practice: Leakage is specifically about production escapes. Escape Rate can be tracked at every SDLC phase transition — from requirements review to design, design to development, unit testing to system testing, etc.
- Both matter: Track leakage to understand production quality. Track escape rates at each phase to identify where defects are being missed and invest improvement there
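The two formulas above can be sketched in a few lines of Python. This is a minimal illustration, not a standard library or tool; the phase names and defect counts are made-up example data.

```python
def leakage_rate(post_release_defects, total_defects):
    """Leakage Rate = defects found after release / all defects in the
    release (including post-release), as a percentage."""
    if total_defects == 0:
        return 0.0  # no defects at all means nothing leaked
    return post_release_defects / total_defects * 100


def escape_rates(defects_by_phase):
    """Per-phase escape rate: defects found in a phase as a percentage of
    all defects found in that phase plus every earlier phase.

    defects_by_phase: list of (phase_name, defects_found) in SDLC order.
    """
    rates = {}
    cumulative = 0
    for phase, found in defects_by_phase:
        cumulative += found
        rates[phase] = found / cumulative * 100
    return rates


# Illustrative numbers only: 80 total defects across four gates.
found = [("unit testing", 40), ("system testing", 25),
         ("UAT", 10), ("production", 5)]
print(escape_rates(found))
print(leakage_rate(post_release_defects=5, total_defects=80))  # 6.25
```

Note that the escape rate at the production "phase" (5 of 80 defects, 6.25%) equals the leakage rate — which matches the distinction drawn above: leakage is the escape rate at the final, external gate.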
Reducing Defect Leakage — Investigation Framework
When leakage is high, investigate:
- Coverage gap — are there requirements with no test cases? Run an RTM analysis.
- Priority bias — are we spending testing time on low-risk features while high-risk ones are under-tested? Review test case distribution against the risk profile.
- Environment mismatch — are the defects only reproducible in production environments that differ from test environments? Investigate environment configuration differences.
- Timing issue — are production defects related to timing, load, or data conditions that don't exist in testing? Consider performance or load testing.
- New device/browser — are defects only appearing on devices or OS versions not covered in compatibility testing? Expand the compatibility test matrix.
For each production defect, conduct a mini-RCA: "Why wasn't this found in testing?" The answer reveals your coverage gap.
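The mini-RCA step above lends itself to a simple tally: tag each escaped defect with the reason it was missed, then count tags to find the biggest gap. This is a hypothetical sketch — the category names mirror the five investigation areas listed here, and the defect IDs are invented.

```python
from collections import Counter

# One tag per investigation area in the framework above.
LEAKAGE_CAUSES = {
    "coverage_gap", "priority_bias", "environment_mismatch",
    "timing_or_load", "device_or_browser",
}


def tally_leakage_causes(production_defects):
    """production_defects: list of (defect_id, cause) pairs.
    Returns causes sorted by frequency, most common first."""
    counts = Counter()
    for defect_id, cause in production_defects:
        if cause not in LEAKAGE_CAUSES:
            raise ValueError(f"unknown cause {cause!r} for {defect_id}")
        counts[cause] += 1
    return counts.most_common()


# Invented example data: four defects that escaped to production.
escaped = [
    ("BUG-101", "environment_mismatch"),
    ("BUG-102", "coverage_gap"),
    ("BUG-103", "environment_mismatch"),
    ("BUG-104", "timing_or_load"),
]
print(tally_leakage_causes(escaped))  # environment mismatch tops the list
```

With real data, the top category tells you where to invest first — here, two of four escapes trace to environment differences, pointing at test environment configuration rather than test case coverage.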
Tip
Practice computing defect leakage and escape rate on small, isolated examples before applying the metrics to larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Practice Task
(1) Write a working example computing defect leakage and escape rate from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with defect leakage and escape rate is skipping edge case testing — empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready QA engineering code.
Key Takeaways
- Defect leakage and escape rate quantify customer-visible quality failures and are among the quality metrics most visible to executive leadership.
- Leakage Rate = (Defects found after release / Total defects in the release, including post-release) × 100; it measures how effective testing coverage was.
- Escape Rate = (Defects found in current phase / (Defects found in current phase + all previous phases)) × 100; it applies to any quality gate, internal or external.
- Leakage is specifically about production escapes; escape rate can be tracked at every SDLC phase transition to show where defects are being missed.