Regression Testing After Bug Fixes
Every bug fix introduces the risk of regression — the fix changes code that may affect functionality beyond the defect being resolved. Regression testing is the systematic process of verifying that a fix resolves the original defect AND that it hasn't broken anything else. Skipping or under-investing in regression is one of the most common causes of high production defect escape rates.
Regression Testing Strategy
- Retest the original defect: Verify the exact scenario described in the bug report now passes with the fix applied. Use the same test data, environment, and steps
- Test adjacent functionality: Identify all functionality that uses the same code changed by the fix — test all of it. A payment fix that touches SharedValidationUtils may affect every form that uses the same utility
- Check for side effects: Review the developer's change notes — what files were modified, what functions changed? Map those changes to test cases that cover the modified code
- Regression suite selection: For each bug fix, select a targeted regression subset rather than running the entire suite; this keeps the feedback cycle fast. For major fixes or changes to core functionality, run the full regression suite
- Automated regression value: The compounding value of automated regression suites is realized here — when a fix is deployed, automated tests execute in minutes, providing immediate confidence that existing functionality is intact. Manual regression for every fix would be prohibitively time-consuming
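The regression-subset selection step above can be sketched as a lookup from changed files to the test cases that cover them. This is a minimal sketch: the file paths, test-case IDs, and the hand-written `COVERAGE_MAP` are all hypothetical; real teams derive this mapping from code-coverage or test-impact-analysis tooling rather than maintaining it by hand.

```python
# Hypothetical coverage map: which test cases exercise which source files.
COVERAGE_MAP = {
    "payment/charge.py": ["TC-201", "TC-202"],
    "shared/validation_utils.py": ["TC-201", "TC-310", "TC-455"],
    "ui/logo.py": ["TC-900"],
}

def select_regression_subset(changed_files, coverage_map):
    """Return the deduplicated, sorted set of test cases touching any changed file."""
    selected = set()
    for path in changed_files:
        selected.update(coverage_map.get(path, []))
    return sorted(selected)

# A fix that touches a shared utility pulls in every dependent test case.
print(select_regression_subset(["shared/validation_utils.py"], COVERAGE_MAP))
# ['TC-201', 'TC-310', 'TC-455']
```

Note how a single changed shared file selects tests from several features, which is exactly the "test adjacent functionality" point: the blast radius of a fix is defined by what depends on the changed code, not by the feature the bug was filed against.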
Practical Example — Defect Lifecycle Tracker & Metrics
# Practical: Defect lifecycle, severity/priority, and density metrics in Python
from dataclasses import dataclass
from typing import List
from collections import Counter

@dataclass
class Defect:
    bug_id: str
    title: str
    severity: str           # Critical / High / Medium / Low
    priority: str           # High / Medium / Low
    status: str = "New"     # New → Open → Fixed → Retest → Closed / Deferred / Rejected
    module: str = "Unknown"

    def transition(self, new_status: str):
        valid = {
            "New": ["Open", "Rejected"],
            "Open": ["Fixed", "Deferred"],
            "Fixed": ["Retest"],
            "Retest": ["Closed", "Open"],  # Retest → Open = Reopened
            "Closed": [],
            "Deferred": ["Open"],
            "Rejected": [],
        }
        if new_status not in valid.get(self.status, []):
            raise ValueError(f"Invalid transition: {self.status} → {new_status}")
        self.status = new_status
        print(f"  [{self.bug_id}] {self.title[:40]:<40} → {new_status}")

class DefectTracker:
    def __init__(self):
        self.defects: List[Defect] = []

    def add(self, defect: Defect):
        self.defects.append(defect)

    def open_by_severity(self) -> dict:
        """Count still-actionable defects, grouped by severity."""
        open_defects = [d for d in self.defects if d.status not in ("Closed", "Rejected")]
        return dict(Counter(d.severity for d in open_defects))

    def defect_density(self, module: str, kloc: float) -> float:
        """Defects per 1000 lines of code in a module."""
        count = sum(1 for d in self.defects if d.module == module)
        return round(count / kloc, 2)
# ─── Usage ───────────────────────────────────────────────────────────────────
tracker = DefectTracker()
d1 = Defect("BUG-001", "Login fails with + sign in email", "Critical", "High", module="Auth")
d2 = Defect("BUG-002", "Broken logo on homepage", "Low", "High", module="UI")
d3 = Defect("BUG-003", "Checkout total off by $0.01 for discount codes", "Medium", "Medium", module="Checkout")
tracker.add(d1); tracker.add(d2); tracker.add(d3)
print("─── Defect Lifecycle Transitions ────────────────────────")
d1.transition("Open"); d1.transition("Fixed"); d1.transition("Retest"); d1.transition("Closed")
d2.transition("Open"); d2.transition("Fixed"); d2.transition("Retest"); d2.transition("Open") # Reopened
d3.transition("Deferred")
print("\n─── Open Defects by Severity ─────────────────────────────")
print(tracker.open_by_severity())  # {'Low': 1, 'Medium': 1} (BUG-001 is Closed)
print("\n─── Defect Density ───────────────────────────────────────")
print(f"Auth module (2 KLOC): {tracker.defect_density('Auth', 2.0)} defects/KLOC")
print(f"Checkout module (3 KLOC): {tracker.defect_density('Checkout', 3.0)} defects/KLOC")
# ─── 5 Whys RCA Example ──────────────────────────────────────────────────────
print("\n─── 5 Whys RCA: BUG-001 ──────────────────────────────────")
whys = [
"Login fails for emails containing '+'",
"The + sign is not URL-encoded before being sent to the API",
"Email validation uses a regex that strips special chars before encoding",
"The regex was copied from an old internal tool that assumed ASCII-only input",
"There is no input sanitization standard and no code review checklist for encoding",
]
for i, why in enumerate(whys, 1):
print(f" Why {i}: {why}")
print("  Root Cause → Add encoding standards checklist to code review process")

Module 6 Review — Defect Management
Tip
Practice Regression Testing After Bug Fixes in small, isolated examples before integrating into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Automate your regression suite, and quarantine flaky tests so intermittent failures don't mask real regressions.
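Quarantining can be sketched as skipping known-flaky tests by name instead of letting them fail the build. This is a minimal stdlib-only sketch: the `QUARANTINED` set, the decorator, and the test names are all hypothetical; with pytest you would typically use a skip or xfail marker with a reason pointing at the tracking bug.

```python
import functools

# Hypothetical quarantine list: flaky tests tracked for repair, not silently dropped.
QUARANTINED = {"test_checkout_total"}

def quarantine(test_fn):
    """Skip quarantined tests by name so flaky failures don't block the build."""
    @functools.wraps(test_fn)
    def wrapper():
        if test_fn.__name__ in QUARANTINED:
            return "SKIPPED (quarantined)"
        test_fn()
        return "PASSED"
    return wrapper

@quarantine
def test_checkout_total():
    assert round(19.99 * 0.9, 2) == 17.99

@quarantine
def test_login():
    assert "a" in "abc"

print(test_checkout_total())  # SKIPPED (quarantined)
print(test_login())           # PASSED
```

The important design point is that quarantine is visible and tracked: each skipped test should reference a bug ID, so the suite stays trustworthy while the flake is investigated.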
Practice Task
Practice Task — (1) Write a working example of Regression Testing After Bug Fixes from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with Regression Testing After Bug Fixes is skipping edge cases: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready QA engineering code.
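As a sketch of that advice applied to the tracker example above, two boundary conditions worth probing are an empty tracker and a zero-KLOC module. The `ValueError` guard in `defect_density` is an addition for illustration, not part of the original example, and the class is trimmed to just the two methods under test.

```python
from collections import Counter
from typing import List

class DefectTracker:
    def __init__(self):
        self.defects: List = []

    def open_by_severity(self) -> dict:
        open_defects = [d for d in self.defects if d.status not in ("Closed", "Rejected")]
        return dict(Counter(d.severity for d in open_defects))

    def defect_density(self, module: str, kloc: float) -> float:
        if kloc <= 0:
            raise ValueError("kloc must be positive")  # guard against divide-by-zero
        count = sum(1 for d in self.defects if d.module == module)
        return round(count / kloc, 2)

tracker = DefectTracker()
print(tracker.open_by_severity())  # {} : empty input is a valid result, not an error
try:
    tracker.defect_density("Auth", 0)
except ValueError as exc:
    print(exc)  # kloc must be positive
```

Without the guard, a zero-KLOC module would raise an unhandled `ZeroDivisionError`, which is exactly the kind of boundary condition a regression test should pin down explicitly.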
Key Takeaways
- Every bug fix introduces the risk of regression — the fix changes code that may affect functionality beyond the defect being resolved.
- Retest the original defect: Verify the exact scenario described in the bug report now passes with the fix applied. Use the same test data, environment, and steps
- Test adjacent functionality: Identify all functionality that uses the same code changed by the fix — test all of it. A payment fix that touches SharedValidationUtils may affect every form that uses the same utility
- Check for side effects: Review the developer's change notes — what files were modified, what functions changed? Map those changes to test cases that cover the modified code