Framework Maintenance Best Practices
A test framework that isn't maintained quickly becomes a burden rather than an asset. Tests go stale, locators break, environments drift, and the team starts ignoring failing tests. Framework maintenance is a professional discipline: it covers locator management, flaky-test elimination, dependency updates, and report monitoring. Neglecting it is a leading cause of abandoned automation suites.
Framework Health Metrics and Maintenance Strategy
- Pass rate target: >95% on main branch (if below, stop adding tests until you fix the broken ones)
- Flaky test threshold: < 2% of tests should be flaky — treat flaky tests as a bug, not a fact of life
- Locator audit: Review and update locators quarterly or after major UI redesigns
- Dependency updates: Review and bump the pinned versions in requirements.txt monthly (see the sketch after this list); old Selenium releases running against new browser versions break silently
- Execution time monitoring: Test suite should run in < 15 minutes in CI; over that, add parallel execution
- Failure triage: Every CI failure should have an owner and a 48-hour resolution target
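To make the pinning-plus-monthly-bump workflow concrete, here is a minimal sketch; the packages and version numbers below are placeholders, not recommendations:
# requirements.txt: pin exact versions so every CI run is reproducible
# selenium==4.21.0
# pytest==8.2.0
# pytest-rerunfailures==14.0
# pytest-xdist==3.6.1
#
# Monthly update flow (bump pins locally, run the suite, then commit):
# pip list --outdated              # see which pins have fallen behind
# pip install --upgrade selenium   # bump one package at a time
# pip freeze > requirements.txt    # re-pin, then run the full suite before merging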
Practical Maintenance Patterns
# ══════════════════════════════════════════════════════════════
# FLAKY TEST DETECTION AND QUARANTINE
# ══════════════════════════════════════════════════════════════
# Automatic retry for flaky tests (requires: pip install pytest-rerunfailures).
# In pytest.ini:
# [pytest]
# addopts = --reruns 2 --reruns-delay 1
import pytest

# Mark known flaky tests for targeted retries (don't block CI):
@pytest.mark.flaky(reruns=3, reruns_delay=2)
def test_payment_animation_completes(driver):
    # This test is timing-sensitive — retry up to 3 times
    pass

# Quarantine flaky tests (skip in CI, fix in sprint):
@pytest.mark.skip(reason="FLAKY-042: Animation timing — assigned to @bob, sprint 24")
def test_carousel_auto_scrolls(driver):
    pass
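To cover the "detection" half of the banner above, a conftest.py hook can tally the reruns that pytest-rerunfailures records (retried attempts are reported with the outcome "rerun"). A minimal sketch; the report format is illustrative and counts are per test session:
# conftest.py: surface flaky candidates by counting reruns per test
from collections import Counter

rerun_counts = Counter()

def pytest_runtest_logreport(report):
    # pytest-rerunfailures marks each retried attempt with outcome "rerun"
    if report.when == "call" and report.outcome == "rerun":
        rerun_counts[report.nodeid] += 1

def pytest_sessionfinish(session, exitstatus):
    # Print a flaky-candidate summary at the end of the run
    if rerun_counts:
        print("\nFlaky candidates (tests that needed reruns this session):")
        for nodeid, count in rerun_counts.most_common():
            print(f"  {nodeid}: {count} rerun(s)")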
# ══════════════════════════════════════════════════════════════
# LOCATOR HEALTH CHECK SCRIPT
# ══════════════════════════════════════════════════════════════
# Run this script weekly to detect broken locators before tests run
import json

from selenium import webdriver
from selenium.webdriver.common.by import By

def audit_locators(url, locators_json):
    """Check all locators against the live application."""
    driver = webdriver.Chrome()
    broken = []
    try:
        driver.get(url)
        with open(locators_json) as f:
            locators = json.load(f)
        # The locators file maps page -> element name -> CSS selector
        for page, elements in locators.items():
            for element_name, selector in elements.items():
                try:
                    found = driver.find_elements(By.CSS_SELECTOR, selector)
                    if not found:
                        broken.append({"page": page, "element": element_name,
                                       "selector": selector, "issue": "NOT_FOUND"})
                except Exception as e:
                    broken.append({"page": page, "element": element_name,
                                   "selector": selector, "issue": str(e)})
    finally:
        # Always release the browser, even if the audit raises
        driver.quit()
    if broken:
        print(f"❌ {len(broken)} broken locators found:")
        for b in broken:
            print(f"  [{b['page']}] {b['element']}: {b['selector']} → {b['issue']}")
    else:
        print("✅ All locators verified")
    return broken
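The script assumes a locators file shaped as page → element → CSS selector; the file contents and URL below are hypothetical:
# locators.json:
# {
#   "login_page": {
#     "username_field": "#username",
#     "submit_button": "button[type='submit']"
#   }
# }
# Usage:
# broken = audit_locators("https://staging.example.com", "locators.json")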
# ══════════════════════════════════════════════════════════════
# CI/CD PIPELINE HEALTH GATES
# ══════════════════════════════════════════════════════════════
# .github/workflows/test.yml:
#   - name: Run Smoke Tests
#     run: pytest tests/ -m smoke --tb=short -q
#     # If smoke tests fail → block PR merge
#
#   - name: Run Regression (parallel)
#     run: pytest tests/ -m regression -n 4 --tb=line
#     # 4 parallel workers via pytest-xdist; keeps the suite inside the 15-minute target
#
#   - name: Upload Allure Results
#     uses: actions/upload-artifact@v4
#     with:
#       name: allure-results
#       path: allure-results/
#       retention-days: 30
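Because the locator audit above is meant to run weekly, a scheduled CI job keeps it from depending on anyone's memory. A minimal sketch; the workflow name, script path, and schedule are assumptions:
# .github/workflows/locator-audit.yml:
# on:
#   schedule:
#     - cron: '0 6 * * 1'    # every Monday, 06:00 UTC
# jobs:
#   audit:
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       - run: pip install -r requirements.txt
#       - run: python audit_locators.py    # hypothetical wrapper around audit_locators()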
Common Mistakes
- Ignoring flaky tests — treating intermittent failures as 'normal'; every flaky test is a reliability bug that must be fixed or quarantined
- Not pinning dependency versions — 'pip install selenium' may install a version that breaks existing tests; use requirements.txt with pinned versions
- Deleting old test cases without archiving — 'clean up unused tests' deletes historical coverage evidence; archive with @pytest.mark.archived instead (see the sketch after this list)
- Framework owned by one person — define team agreements about framework changes; a single developer exit should not cripple the test suite
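The @pytest.mark.archived pattern only works if the marker is registered and archived tests are skipped rather than executed. A minimal sketch, assuming pytest.ini and conftest.py sit at the project root:
# pytest.ini:
# [pytest]
# markers =
#     archived: retained as historical coverage evidence, not executed
#
# conftest.py:
import pytest

def pytest_collection_modifyitems(config, items):
    # Skip archived tests instead of deleting them outright
    for item in items:
        if item.get_closest_marker("archived"):
            item.add_marker(pytest.mark.skip(reason="archived: kept for coverage history"))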
Tip
Practice Framework Maintenance Best Practices in small, isolated examples before integrating into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Practice Task
(1) Write a working example of Framework Maintenance Best Practices from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with Framework Maintenance Best Practices is skipping edge case testing — empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready test code.
Key Takeaways
- A test framework that isn't maintained quickly becomes a burden rather than an asset.
- Pass rate target: >95% on main branch (if below, stop adding tests until you fix the broken ones)
- Flaky test threshold: < 2% of tests should be flaky — treat flaky tests as a bug, not a fact of life
- Locator audit: Review and update locators quarterly or after major UI redesigns