
Software Testing & QA Interview Questions

Master these 31 carefully curated interview questions to ace your next Software Testing & QA interview.

What is the difference between manual testing and automated testing?

Quick Answer

Manual testing is human-executed; automated testing uses scripts to run tests repeatedly, faster, and more consistently.

Detailed Explanation

Manual: exploratory testing, usability testing, ad-hoc testing. Automated: regression testing, load testing, CI/CD integration. Automate: repetitive, high-risk, data-driven tests. Keep manual: exploratory, UX, edge cases needing judgment. Tools: Selenium, Cypress, Playwright, JUnit, pytest.

What are the main levels of software testing?

Quick Answer

Unit testing (individual functions), integration testing (module interactions), system testing (complete system), acceptance testing (user validation).

Detailed Explanation

Unit: test individual functions/methods in isolation. Mocks/stubs for dependencies. Integration: test module interactions, API contracts, database queries. System: end-to-end testing of complete application. Acceptance: UAT by stakeholders, validates business requirements. Testing pyramid: many unit tests, fewer integration, even fewer E2E.
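A unit test isolating its dependency with a stub can look like the sketch below — a minimal illustration, where `total_price` and its `tax_service` collaborator are hypothetical names, not from a real codebase:

```python
from unittest.mock import Mock

def total_price(cart, tax_service):
    """Function under test: depends on an external tax service."""
    subtotal = sum(item["price"] for item in cart)
    return subtotal + tax_service.tax_for(subtotal)

def test_total_price_in_isolation():
    # Stub out the dependency so the unit test stays fast and deterministic
    tax_service = Mock()
    tax_service.tax_for.return_value = 2.0
    assert total_price([{"price": 12.0}, {"price": 8.0}], tax_service) == 22.0
    # Also verify the collaborator was called with the computed subtotal
    tax_service.tax_for.assert_called_once_with(20.0)
```

An integration test of the same code would use the real tax service (and accept the extra setup and runtime cost that comes with it).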

What is the testing pyramid?

Quick Answer

Testing pyramid recommends many unit tests (base), fewer integration tests (middle), and minimal E2E tests (top) for fast, reliable testing.

Detailed Explanation

Unit (70%): fast, isolated, run in ms. Integration (20%): test boundaries, APIs, databases. E2E (10%): slow, brittle, simulate real user flows. Anti-pattern: ice cream cone (mostly E2E). Benefits of pyramid: fast feedback, reliable results, easier debugging. Each layer catches different types of bugs.

What is test-driven development (TDD)?

Quick Answer

TDD writes tests before code: Red (write failing test) → Green (make it pass) → Refactor (improve code).

Detailed Explanation

Cycle: (1) Write a failing test for desired behavior. (2) Write minimum code to pass the test. (3) Refactor for clean code. Benefits: better design, high coverage, living documentation, confidence to refactor. Challenges: learning curve, slower initially, over-testing. BDD extends TDD with business language (Given-When-Then).
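A compressed Red-Green pass might look like this — `slugify` is an invented example function used only to show the cycle:

```python
# Red: the tests below were written first and failed (slugify did not exist).
# Green: this minimal implementation makes them pass.
# Refactor: with the tests green, the code can now be cleaned up safely.
def slugify(title):
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_extra_whitespace():
    assert slugify("  Red  Green   Refactor ") == "red-green-refactor"
```

Each new behavior (say, stripping punctuation) would start with another failing test before any implementation change.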

What is API testing?

Quick Answer

API testing validates endpoints for correct responses, status codes, data format, error handling, and performance.

Detailed Explanation

Test: (1) HTTP methods (GET, POST, PUT, DELETE). (2) Status codes (200, 201, 400, 401, 404, 500). (3) Response body structure and data types. (4) Authentication/authorization. (5) Error handling and validation. (6) Performance. Tools: Postman, RestAssured, Supertest, Cypress. Contract testing: Pact for API compatibility between services.
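The status-code and response-body checks above can be sketched as a small helper — an illustrative stand-in (no real HTTP call; `check_response` is a hypothetical name) for what Postman or RestAssured assertions do:

```python
def check_response(status, body, expected_status=200, required_fields=()):
    """Collect problems with an API response; an empty list means it passed."""
    problems = []
    if status != expected_status:
        problems.append(f"expected status {expected_status}, got {status}")
    for field in required_fields:
        if field not in body:
            problems.append(f"missing field: {field!r}")
    return problems

# A created-user response should be 201 with id and email present
assert check_response(201, {"id": 7, "email": "a@b.c"},
                      expected_status=201,
                      required_fields=("id", "email")) == []

# A failing response reports every problem, not just the first
assert check_response(500, {}, expected_status=201,
                      required_fields=("id",)) == [
    "expected status 201, got 500",
    "missing field: 'id'",
]
```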

What is performance testing?

Quick Answer

Performance testing evaluates speed, scalability, and stability under load through load testing, stress testing, and spike testing.

Detailed Explanation

Types: Load (expected users), Stress (beyond capacity), Spike (sudden surge), Endurance (sustained load), Scalability (growth testing). Metrics: response time, throughput, error rate, resource utilization. Tools: JMeter, k6, Gatling, Locust. Identify: bottlenecks, memory leaks, connection limits, database slow queries.

What is the Page Object Model (POM)?

Quick Answer

POM separates page elements and actions into classes, making test code reusable, readable, and maintainable.

Detailed Explanation

Structure: each page/component has a class with locators and methods. Tests call page methods instead of directly interacting with elements. Benefits: DRY (change locator in one place), readable tests, component reuse. Example: LoginPage.login(user, password) wraps finding elements, typing, clicking. Used with Selenium, Playwright, Cypress.
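A minimal sketch of the pattern, assuming a hypothetical driver interface with `type` and `click` methods (real Selenium or Playwright calls differ — e.g. `find_element(...).send_keys(...)` — but the structure is the same):

```python
class FakeDriver:
    """Stand-in for a real browser driver; records what the page object does."""
    def __init__(self):
        self.actions = []
    def type(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: locators and actions live here, not in the tests."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# Tests read as intent ("log in"), not as element-finding mechanics
driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
assert ("click", "button[type=submit]") in driver.actions
```

If the submit button's selector changes, only `LoginPage.SUBMIT` changes — no test touches it.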

How do you integrate testing into a CI/CD pipeline?

Quick Answer

Integrate tests in CI pipeline: unit tests on every commit, integration tests on PR, E2E tests before deployment.

Detailed Explanation

Pipeline: (1) Pre-commit: lint, format. (2) CI (every push): unit tests, build. (3) PR merge: integration tests, code coverage check. (4) Pre-deploy: E2E tests in staging. (5) Post-deploy: smoke tests in production. Tools: GitHub Actions, Jenkins, GitLab CI. Reporting: Allure, HTML reports. Parallel execution for speed. Flaky test detection and quarantine.

How would you speed up a slow test suite?

Quick Answer

Parallelize tests, remove duplicates, mock external services, optimize setup/teardown, and prioritize by risk.

Detailed Explanation

Steps: (1) Parallel execution (pytest -n auto, parallel Cypress). (2) Remove redundant tests. (3) Mock external APIs/databases for unit tests. (4) Shared setup (fixtures, beforeAll). (5) Faster assertions. (6) Split by priority: run critical tests in CI, full suite nightly. (7) Test impact analysis: only run affected tests. (8) Container reuse for integration tests.
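Step (4), shared setup, looks like this in pytest — a sketch where `build_seed_data` is a hypothetical stand-in for any expensive setup such as seeding a database:

```python
import pytest

def build_seed_data():
    """Stand-in for expensive setup, e.g. seeding a test database."""
    return {"users": [{"id": 1, "name": "alice"}]}

@pytest.fixture(scope="module")
def db():
    # scope="module": built once per test file instead of once per test,
    # a large win when setup takes seconds rather than milliseconds
    return build_seed_data()

def test_user_present(db):
    assert any(u["name"] == "alice" for u in db["users"])
```

The trade-off is shared state: tests must not mutate the fixture, or the speedup buys flakiness.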

How does Google approach testing at scale?

Quick Answer

Google uses extensive automated testing, code reviews, testing culture, and tools like Bazel for fast, reliable builds.

Detailed Explanation

Practices: (1) Monorepo with millions of tests. (2) Bazel: fast, hermetic builds. (3) Mandatory code review. (4) Test pyramid enforced. (5) Google Testing Blog sharing best practices. (6) Mutation testing for test quality. (7) Canary releases. (8) Dedicated engineering productivity teams. (9) Test Certified program to level up team testing practices. Key insight: testing is a culture, not just a process.

What is the difference between functional and non-functional testing?

Quick Answer

Functional tests verify what the system does (features); non-functional tests verify how it performs (speed, security, usability).

Detailed Explanation

Functional: unit testing, integration testing, system testing, acceptance testing, regression testing, smoke testing. Validates business requirements and user workflows. Non-functional: performance testing (load, stress), security testing (penetration, vulnerability), usability testing, compatibility testing, reliability testing. Both are essential: functional ensures correctness, non-functional ensures quality attributes. Non-functional requirements are often implied but must be explicitly tested.

What is regression testing, and when do you run it?

Quick Answer

Regression testing re-runs existing tests after code changes to ensure new code hasn't broken previously working functionality.

Detailed Explanation

When: after bug fixes, new features, refactoring, dependency updates, configuration changes. Types: full regression (all tests), partial regression (affected areas), selective (risk-based prioritization). Automation: regression suites should be automated — Selenium, Cypress, Playwright for UI; API tests for backend. Prioritization: critical path first, recently broken areas, frequently changing modules. CI/CD: run regression on every merge request. Smoke test → sanity test → full regression is typical pipeline.

What is the difference between smoke testing and sanity testing?

Quick Answer

Smoke testing verifies basic functionality works after a build; sanity testing checks specific functionality after a minor change.

Detailed Explanation

Smoke testing: breadth-first, 'does the build work at all?', run on every new build, covers major features superficially. Example: can login, homepage loads, checkout works. Sanity testing: depth-focused, 'does this specific fix/feature work?', run after specific changes, covers narrow area deeply. Example: after login bug fix, thoroughly test all login scenarios. Smoke is a subset of regression. Sanity is a subset of regression for specific areas. Both are gatekeeper tests — fail = stop further testing.

What are stubs, mocks, fakes, and spies?

Quick Answer

Stubs return predefined data; mocks verify interactions; fakes have working implementations; spies record calls for assertions.

Detailed Explanation

Stub: returns hardcoded values, no behavior verification. Example: stub that always returns user object. Mock: set expectations on how it should be called, verify interactions. Example: verify repository.save() was called with specific user. Fake: lightweight working implementation. Example: in-memory database instead of real DB. Spy: wraps real object, records calls for later assertion. Example: verify how many times and with what args method was called. Choosing: stubs for queries, mocks for commands, fakes for complex dependencies, spies for monitoring behavior.
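All four doubles can be shown with Python's standard `unittest.mock` — the repository example names (`repo`, `FakeUserRepo`) are invented for illustration:

```python
from unittest.mock import MagicMock

# Stub: canned answer, no verification of how it was called
repo = MagicMock()
repo.find.return_value = {"id": 1, "name": "alice"}
assert repo.find(99)["name"] == "alice"   # always the canned value

# Mock: the assertion is about the interaction itself
repo.save({"id": 2})
repo.save.assert_called_once_with({"id": 2})

# Fake: a real but lightweight implementation (here, in-memory storage)
class FakeUserRepo:
    def __init__(self):
        self._users = {}
    def save(self, user):
        self._users[user["id"]] = user
    def find(self, user_id):
        return self._users.get(user_id)

fake = FakeUserRepo()
fake.save({"id": 3, "name": "bob"})
assert fake.find(3)["name"] == "bob"

# Spy: wraps the real object, delegates calls, and records them
spy = MagicMock(wraps=fake)
assert spy.find(3)["name"] == "bob"   # real behaviour preserved
spy.find.assert_called_once_with(3)   # and the call was recorded
```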

What are equivalence partitioning and boundary value analysis?

Quick Answer

Equivalence partitioning divides inputs into groups of equivalent behavior; boundary value analysis tests at edges of those groups.

Detailed Explanation

Equivalence partitioning: divide input domain into classes where all values should behave identically. Test one from each class. Example: age field (1-17: minor, 18-64: adult, 65+: senior) — test one from each partition. Boundary value analysis: test at partition boundaries where bugs most likely occur. Example: test 0, 1, 17, 18, 64, 65, max. BVA typically tests boundary-1, boundary, boundary+1. Combined: minimum test cases with maximum coverage. Used in: unit tests, API validation, form testing. Off-by-one errors caught at boundaries.
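The age-field example can be tested exactly this way — `age_category` is a hypothetical implementation of the partitions described above (valid range assumed to be 1-130):

```python
def age_category(age):
    """Partitions from the example: 1-17 minor, 18-64 adult, 65-130 senior."""
    if not 1 <= age <= 130:
        raise ValueError("age out of range")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# Equivalence partitioning: one representative per class
assert age_category(10) == "minor"
assert age_category(40) == "adult"
assert age_category(80) == "senior"

# Boundary value analysis: values at and around each boundary
for age, expected in [(1, "minor"), (17, "minor"), (18, "adult"),
                      (64, "adult"), (65, "senior"), (130, "senior")]:
    assert age_category(age) == expected

# The invalid partitions get boundary values too (0 and 131)
for invalid in (0, 131):
    try:
        age_category(invalid)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```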

How does the TDD cycle work in practice?

Quick Answer

TDD follows Red-Green-Refactor: write failing test first, write minimal code to pass, then refactor — tests drive the design.

Detailed Explanation

Cycle: (1) Red: write test for desired behavior — it fails (no implementation yet). (2) Green: write simplest code to make test pass. (3) Refactor: improve code while keeping tests green. Benefits: high test coverage, better design (testable by nature), documentation through tests, confidence in refactoring. ATDD (Acceptance TDD): start with acceptance criteria. BDD: describe behavior in domain language (Given-When-Then with Cucumber/SpecFlow). Criticism: slower initially, strict adherence can be dogmatic. Pragmatic TDD: test first for complex logic, test after for simple CRUD.
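The Given-When-Then structure from BDD can be sketched directly as comments in a plain test — `Account` is an invented example class; Cucumber/SpecFlow express the same three steps in business language instead:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account holding 100
    account = Account(100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70
```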

What does a test plan contain?

Quick Answer

A test plan defines scope, approach, resources, schedule, environments, entry/exit criteria, and risk assessment for testing.

Detailed Explanation

Components: (1) Objectives: what are we testing and why. (2) Scope: in-scope and out-of-scope features. (3) Test strategy: types of testing, levels, automation vs manual. (4) Test environment: hardware, software, test data requirements. (5) Schedule: milestones, deadlines. (6) Resources: team, tools, training. (7) Entry criteria: when to start (build available, environment ready). (8) Exit criteria: when to stop (coverage %, zero critical bugs, all blocked resolved). (9) Risk assessment: identify risks and mitigation. (10) Deliverables: test reports, defect logs. IEEE 829 standard for test documentation.

How do you conduct performance testing?

Quick Answer

Performance testing measures system response time, throughput, and stability under load using tools like JMeter, k6, and Gatling.

Detailed Explanation

Types: Load testing (expected users), stress testing (beyond capacity — find breaking point), endurance/soak testing (sustained load — memory leaks), spike testing (sudden traffic surge), scalability testing (adding resources). Tools: JMeter (Java, GUI), k6 (JavaScript, modern), Gatling (Scala), Locust (Python), Artillery (Node.js). Metrics: response time (p50, p95, p99), throughput (requests/sec), error rate, CPU/memory usage. Process: define SLAs → create realistic scenarios → execute → analyze → optimize → retest. Monitor: APM tools (New Relic, Datadog) during tests.
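The percentile metrics (p50, p95, p99) can be computed from raw latency samples with the standard library — the sample data here is made up for illustration:

```python
import statistics

# Latencies collected during a load test (milliseconds) - sample data
samples_ms = [95, 97, 98, 99, 101, 105, 110, 120, 250, 300]

# statistics.quantiles with n=100 yields the 1st..99th percentiles
percentiles = statistics.quantiles(samples_ms, n=100)
p50, p95, p99 = percentiles[49], percentiles[94], percentiles[98]

# A long tail shows up as p95/p99 sitting far above the median:
# averages hide it, which is why SLAs are usually stated in percentiles
assert p50 < p95 <= p99
```

Real tools (JMeter, k6) report these percentiles directly; computing them by hand is mainly useful for ad-hoc analysis of exported results.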

How do you automate testing in a CI/CD pipeline?

Quick Answer

Automate test execution in CI/CD stages: unit tests on commit, integration tests on PR, E2E tests before deployment.

Detailed Explanation

Pipeline stages: (1) Commit: lint, unit tests (seconds). (2) PR/merge: integration tests, API tests (minutes). (3) Staging: E2E tests, performance tests, security scans (minutes-hours). (4) Production: smoke tests, canary deployment, monitoring. Test pyramid: many unit tests (fast), fewer integration, few E2E (slow). Parallel execution: split test suites across workers. Flaky test management: quarantine, retry, fix. Test reporting: JUnit XML format, coverage reports. Quality gates: minimum coverage %, zero critical issues. Tools: GitHub Actions, Jenkins, GitLab CI, Azure DevOps.

How do you test a REST API thoroughly?

Quick Answer

API testing validates endpoints for correct responses, status codes, error handling, authN/authZ, and contract compliance.

Detailed Explanation

Coverage: (1) Happy path: correct request → expected response. (2) Validation: invalid input → proper error codes (400, 422). (3) Authentication: unauthorized → 401, forbidden → 403. (4) Edge cases: empty arrays, null values, max lengths, special characters. (5) Performance: response time under load. (6) Contract: response matches schema (OpenAPI/Swagger). Tools: Postman/Newman, REST Assured (Java), SuperTest (Node), pytest + requests (Python). Techniques: data-driven testing, schema validation, snapshot testing. CI: Newman collections in pipeline. Contract testing: Pact for consumer-driven contracts.
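Data-driven testing of the validation cases above can be sketched as a case table — `create_user_status` is a toy stand-in for a hypothetical endpoint's validation rules, not a real API call:

```python
def create_user_status(payload):
    """Toy stand-in for an endpoint's validation rules (hypothetical)."""
    email = payload.get("email")
    if not email or "@" not in email:
        return 422   # unprocessable: missing or malformed email
    return 201       # created

# Data-driven cases: happy path plus validation and edge cases
cases = [
    ({"email": "a@example.com"}, 201),  # happy path
    ({}, 422),                          # missing field
    ({"email": ""}, 422),               # empty value
    ({"email": "not-an-email"}, 422),   # malformed value
]
for payload, expected in cases:
    assert create_user_status(payload) == expected
```

In pytest the same table would feed `@pytest.mark.parametrize`, so each case reports as its own test.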

How do you handle a critical bug found in production?

Quick Answer

Reproduce, assess impact, communicate to stakeholders, create hotfix, test fix, deploy with rollback plan, post-mortem.

Detailed Explanation

Steps: (1) Reproduce: confirm bug, identify steps. (2) Assess severity: users affected, data integrity, security. (3) Communicate: notify team lead, product owner, affected users if needed. (4) Workaround: temporary mitigation if possible. (5) Root cause: analyze logs, error tracking (Sentry, Bugsnag). (6) Hotfix: minimal targeted fix, not feature development. (7) Test: automated tests for the fix, regression on affected areas. (8) Deploy: follow hotfix process, have rollback ready. (9) Verify: monitor in production. (10) Post-mortem: why wasn't it caught, add test coverage, update test plan.

How would you test a login page?

Quick Answer

Test valid credentials, invalid inputs, edge cases, security scenarios, accessibility, performance, and cross-browser compatibility.

Detailed Explanation

Functional: valid login, invalid password, invalid username, empty fields, SQL injection in inputs, XSS in inputs, case sensitivity, special characters, max length, remember me, forgot password link, session creation. Security: brute force protection, account lockout, CAPTCHA, HTTPS, token storage. Usability: error messages helpful but not revealing, password visibility toggle, autofill, tab order. Performance: concurrent logins, response time. Accessibility: screen reader, keyboard navigation, contrast. Cross-browser: Chrome, Firefox, Safari, Edge. Mobile: responsive layout, touch targets.

What is mutation testing?

Quick Answer

Mutation testing modifies source code (mutations) to check if tests detect the changes — surviving mutants indicate test gaps.

Detailed Explanation

Process: (1) Create mutants: change operators (> to <), remove conditions, alter return values. (2) Run tests against each mutant. (3) If test fails: mutant killed (tests are effective). (4) If test passes: mutant survived (test gap found). Mutation score = killed/total mutants. Tools: Stryker (JS/TS/.NET), PITest (Java), mutmut (Python). Benefits: measures test quality better than code coverage — 100% coverage doesn't mean tests catch bugs. Limitations: slow (many mutants), equivalent mutants (same behavior). Use selectively on critical code.
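A hand-rolled example of what Stryker/PITest/mutmut automate — the functions are invented to show one mutant surviving a weak suite and dying under a stronger one:

```python
def is_adult(age):
    return age >= 18

def mutant_is_adult(age):
    return age > 18   # mutation: >= changed to >

def weak_suite(fn):
    # Only exercises a value far from the boundary
    return fn(30)

def strong_suite(fn):
    # Also exercises the boundary itself
    return fn(30) and fn(18)

# The weak suite passes for both versions: the mutant SURVIVES (test gap)
assert weak_suite(is_adult) and weak_suite(mutant_is_adult)

# The strong suite fails on the mutant: the mutant is KILLED
assert strong_suite(is_adult) and not strong_suite(mutant_is_adult)
```

Note both suites give 100% line coverage of `is_adult`, yet only the strong one detects the off-by-one mutation — exactly why mutation score beats coverage as a quality signal.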

How do you prioritize testing when time is limited?

Quick Answer

Prioritize by risk (critical paths, revenue impact), change frequency, defect history, and customer impact using risk-based testing.

Detailed Explanation

Strategies: (1) Risk-based: test high-risk areas first (payment, auth, data integrity). (2) Change-based: test areas modified in current release. (3) Defect density: areas with most historical bugs. (4) Customer impact: features used by most users. (5) Business criticality: revenue-generating features. (6) Test pyramid: fast unit tests exhaustively, slow E2E tests for critical flows. (7) Automation: automate regression, manual for exploratory. (8) Coverage analysis: identify untested areas. (9) Pareto principle: 80% of bugs in 20% of code. Tools: test management (TestRail, Zephyr) with priority tags.

What is contract testing?

Quick Answer

Contract testing verifies API consumers and providers agree on request/response formats independently, catching integration issues early.

Detailed Explanation

Problem: integration tests are slow and flaky. Solution: consumer defines expected requests/responses (contract). Provider verifies it meets all consumer contracts. Tool: Pact (most popular). Consumer test: create mock provider, define interactions. Provider test: replay consumer expectations against real provider. Broker: Pact Broker stores and shares contracts. Benefits: fast (no network), independent team testing, catches breaking changes before deployment. vs Integration testing: contracts test format compatibility, integration tests test behavior. Best for microservices with multiple consumers.
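The core idea can be sketched without Pact itself — a toy contract as a dict plus a provider-side check (this is an illustration of the concept, not Pact's actual API):

```python
# Consumer-defined contract: the response shape this consumer depends on
contract = {
    "status": 200,
    "body_types": {"id": int, "name": str},
}

def provider_satisfies(contract, status, body):
    """Provider-side verification: does a real response honour the contract?"""
    if status != contract["status"]:
        return False
    return all(isinstance(body.get(field), expected_type)
               for field, expected_type in contract["body_types"].items())

# Extra fields are fine - consumers only pin what they actually use
assert provider_satisfies(contract, 200, {"id": 1, "name": "alice", "extra": "ok"})

# Wrong type or wrong status is a breaking change, caught before deployment
assert not provider_satisfies(contract, 200, {"id": "1", "name": "alice"})
assert not provider_satisfies(contract, 404, {"id": 1, "name": "alice"})
```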

What is exploratory testing?

Quick Answer

Exploratory testing is simultaneous test design and execution where testers actively learn the system while testing, guided by intuition.

Detailed Explanation

Charter: time-boxed session (60-90 min) with goal. Session notes: what was tested, bugs found, questions raised. Heuristics guide exploration: boundaries, states, configurations, error handling. Benefits: finds bugs automated tests miss, adapts to what's discovered, exercises user workflows naturally. Complements scripted testing — not a replacement. Experience-based: senior testers find more bugs. Documentation: session-based test management (SBTM). When to use: new features, complex workflows, before release, after major changes. Tools: screen recording, note-taking during sessions.

How do you design a test automation framework?

Quick Answer

Use Page Object Model for UI tests, layered architecture separating tests from framework, data-driven approach, and CI integration.

Detailed Explanation

Architecture: (1) Page Object Model (POM): encapsulate page elements and actions in classes. (2) Page Factory: create page objects. (3) Test data management: external files (JSON, CSV), factories, database seeding. (4) Configuration: environment-specific settings. (5) Reporting: Allure, ExtentReports. (6) Utilities: common functions, waits, screenshots on failure. (7) CI integration: parallel execution, retry logic, artifact storage. Frameworks: Selenium + TestNG (Java), Cypress/Playwright (JS), pytest + Selenium (Python). Design patterns: Screenplay pattern, Builder pattern for test data.

What are shift-left and shift-right testing?

Quick Answer

Shift-left moves testing earlier in development (unit tests, code review); shift-right monitors in production (synthetic tests, observability).

Detailed Explanation

Shift-left: (1) Requirements review (prevent defects). (2) Static analysis in IDE. (3) Unit tests with TDD. (4) Code review with testing focus. (5) Integration tests in PR. Benefits: bugs found earlier are cheaper to fix. Shift-right: (1) A/B testing in production. (2) Synthetic monitoring (scheduled API checks). (3) Chaos engineering (inject failures). (4) Canary deployments (small % traffic). (5) Feature flags for gradual rollout. (6) Real user monitoring (RUM). Both complement each other — shift-left prevents bugs, shift-right catches what slipped through. DevOps and continuous testing mindset.

How do you handle flaky tests?

Quick Answer

Quarantine flaky tests, investigate root causes, fix timing issues, add proper waits, and track flakiness metrics.

Detailed Explanation

Causes: (1) Timing: race conditions, insufficient waits. (2) Environment: shared state, database pollution. (3) Network: external service flakiness. (4) Order dependency: tests depend on execution sequence. (5) UI: dynamic selectors, animations. Fixes: (1) Explicit waits over sleep. (2) Test isolation: clean state before/after. (3) Mock external services. (4) Retry logic with quarantine (max 2 retries). (5) Unique test data per run. (6) Headless browser for consistency. Metrics: track flakiness rate per test. Quarantine: move to separate suite, investigate ASAP. Zero tolerance: flaky tests erode team confidence in test suite.
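Fix (1), explicit waits over sleeps, amounts to polling a condition until a deadline — a minimal sketch (`wait_until` is an invented helper; Selenium's `WebDriverWait` and Playwright's auto-waiting do the same thing internally):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll until condition() is truthy; far more reliable than a fixed sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a flag that some background work would eventually set
state = {"ready": False}
state["ready"] = True      # simulate the work completing
assert wait_until(lambda: state["ready"], timeout=1.0)

# A condition that never holds times out instead of hanging forever
assert not wait_until(lambda: False, timeout=0.2)
```

Unlike `time.sleep(3)`, this returns the instant the condition holds and fails fast with a bounded timeout when it never does.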

What is chaos engineering?

Quick Answer

Chaos engineering intentionally injects failures in production to test system resilience and uncover weaknesses before real incidents.

Detailed Explanation

Principles: (1) Define steady state (normal behavior metrics). (2) Hypothesize: system should remain stable under failure. (3) Inject: network latency, server crash, dependency failure, disk fill. (4) Observe: does system degrade gracefully? Tools: Chaos Monkey (Netflix — random instance termination), Gremlin (commercial), Litmus (Kubernetes), Azure Chaos Studio. Game days: planned chaos exercises with team involvement. Blast radius: start small (single service), expand gradually. Prerequisites: monitoring, alerting, logging must be in place. Combining: functional testing ensures correctness, chaos engineering ensures resilience.
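Fault injection at its simplest is a wrapper that makes a fraction of calls fail — a toy in-process illustration of what Chaos Monkey or Gremlin do at the infrastructure level (the wrapper and its names are invented for this sketch):

```python
import random

def with_injected_failures(fn, failure_rate, seed=None):
    """Wrap a call so a fraction of invocations raise, simulating an outage."""
    rng = random.Random(seed)  # seedable for reproducible experiments
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure")
        return fn(*args, **kwargs)
    return wrapper

fetch = lambda: "payload"

# failure_rate=0.0: steady state, nothing injected
always_ok = with_injected_failures(fetch, failure_rate=0.0)
assert always_ok() == "payload"

# failure_rate=1.0: total outage - does calling code degrade gracefully?
always_down = with_injected_failures(fetch, failure_rate=1.0)
try:
    always_down()
    raise AssertionError("expected an injected failure")
except ConnectionError:
    pass
```

Starting with a small `failure_rate` on one dependency mirrors the "small blast radius" principle before widening the experiment.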
