Test Coverage Measurement
Test coverage measures how much of the software is exercised by testing. There are multiple dimensions of coverage — requirements coverage (are all requirements tested?), feature coverage (are all features tested?), code coverage (what % of code is executed by tests?), and risk coverage (are high-risk areas adequately tested?). Understanding and measuring coverage reveals gaps before they become production defects.
Types of Test Coverage
- Requirements Coverage: % of requirements that have at least one passing test case. Formula: (Requirements with passing tests / Total requirements) × 100 (see the sketch after this list). Target: 100% of critical/high requirements, >90% overall. Requirements with zero test cases are known quality risks
- Feature Coverage: % of application features that have been tested in the current release. Important when not all features have formal requirements — use the feature list from the Product Owner or the release notes
- Test Case Coverage: % of planned test cases that have been executed. Execution coverage below 100% must be documented with justification (risk-based deferral) and stakeholder sign-off
- Code Coverage (unit testing): % of code lines, branches, or paths executed by unit tests. Typically measured by developers — QA ensures code coverage targets are in the Definition of Done. Industry targets: 70-80% line coverage minimum, 60% branch coverage. 100% coverage is impractical and doesn't guarantee quality
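To make the requirements coverage formula concrete, here is a minimal Python sketch that computes it from a hypothetical requirement-to-test traceability mapping. The requirement IDs, field names, and data are illustrative assumptions, not output from any real test management tool.

```python
# Minimal sketch: requirements coverage from a hypothetical
# traceability mapping (illustrative data, not a real project).
requirements = {
    "REQ-001": {"priority": "critical", "tests": ["TC-01", "TC-02"], "passed": True},
    "REQ-002": {"priority": "high",     "tests": ["TC-03"],          "passed": True},
    "REQ-003": {"priority": "medium",   "tests": [],                 "passed": False},
}

def requirements_coverage(reqs: dict) -> float:
    """(Requirements with passing tests / Total requirements) * 100."""
    covered = sum(1 for r in reqs.values() if r["tests"] and r["passed"])
    return covered / len(reqs) * 100

print(f"Requirements coverage: {requirements_coverage(requirements):.1f}%")

# Requirements with zero test cases are the known quality risks
# the lesson warns about — surface them explicitly:
untested = [rid for rid, r in requirements.items() if not r["tests"]]
print(f"Untested requirements: {untested}")
```

Run against real traceability data, the same loop both reports the percentage and lists the untested requirements that need a documented risk decision.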
Communicating Coverage Gaps
Coverage gaps must be surfaced explicitly — never silently accepted. For each gap, document:
- What is not being tested (the specific requirement, feature, or scenario)
- Why it is not being tested (time constraint, environment limitation, deferred decision)
- What the risk is (what could go wrong if this gap produces a production defect)
- Who accepted the risk (name and date of stakeholder sign-off)
Example coverage gap communication in a Test Summary Report: 'Payment with 3D Secure authentication was not tested (24 planned test cases deferred) due to sandbox environment limitations with our 3DS provider. Risk: 3DS authentication may fail for EU customers requiring PSD2 compliance. Risk accepted by: [PM name] on [date]. Mitigation: Monitor 3DS failure rate post-release and hotfix within 24 hours if failure rate exceeds 1%.'
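As a minimal sketch of how those four fields could be captured as a structured record, assuming a simple Python dataclass (the field names and the `summary_line` helper are illustrative, not a standard reporting schema):

```python
# Hypothetical coverage-gap record; field names mirror the four
# questions above and are assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class CoverageGap:
    what: str         # what is not being tested
    why: str          # why it is not being tested
    risk: str         # what could go wrong in production
    accepted_by: str  # stakeholder who signed off
    accepted_on: str  # sign-off date
    mitigation: str   # post-release monitoring / hotfix plan

    def summary_line(self) -> str:
        """Render the gap in the Test Summary Report style shown above."""
        return (f"{self.what} was not tested ({self.why}). "
                f"Risk: {self.risk}. Risk accepted by {self.accepted_by} "
                f"on {self.accepted_on}. Mitigation: {self.mitigation}.")

gap = CoverageGap(
    what="Payment with 3D Secure authentication (24 planned test cases deferred)",
    why="sandbox environment limitations with our 3DS provider",
    risk="3DS authentication may fail for EU customers requiring PSD2 compliance",
    accepted_by="[PM name]",
    accepted_on="[date]",
    mitigation="monitor 3DS failure rate post-release; hotfix within 24 hours if it exceeds 1%",
)
print(gap.summary_line())
```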
Good tests give you the confidence to refactor.
Tip
Practice Test Coverage Measurement on small, isolated examples before integrating it into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Practice Task
(1) Write a working example of Test Coverage Measurement from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with Test Coverage Measurement is skipping edge case testing — empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready QA engineering code.
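As a sketch of what such boundary-condition tests can look like, assuming pytest and an illustrative `average()` helper invented for this example (neither is taken from this lesson's codebase):

```python
# Hypothetical boundary-condition tests for an illustrative
# average() helper, demonstrating empty, null, and wrong-type cases.
import pytest

def average(values):
    """Return the arithmetic mean; reject empty or missing input."""
    if values is None or len(values) == 0:
        raise ValueError("average() requires a non-empty sequence")
    return sum(values) / len(values)

def test_happy_path():
    assert average([2, 4, 6]) == 4

@pytest.mark.parametrize("bad_input", [[], None])
def test_empty_and_null_inputs_raise(bad_input):
    # Empty list and None must fail loudly, not return a silent default.
    with pytest.raises(ValueError):
        average(bad_input)

def test_unexpected_type_raises():
    # Non-numeric elements make sum() raise TypeError.
    with pytest.raises(TypeError):
        average(["a", "b"])
```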
Key Takeaways
- Test coverage measures how much of the software is exercised by testing, across requirements, features, test cases, and code.
- Requirements coverage = (requirements with passing tests / total requirements) × 100; target 100% for critical/high requirements and >90% overall, and treat requirements with zero test cases as known quality risks.
- Feature coverage matters when not all features have formal requirements; work from the Product Owner's feature list or the release notes.
- Test case execution coverage below 100% must be documented with a risk-based justification and stakeholder sign-off.
- Surface every coverage gap explicitly: state what is untested, why, the risk, and who accepted it.