Learning Curves — Diagnosing Overfitting & Underfitting
Learning curves plot training and validation scores as a function of training set size, and they are among the most actionable diagnostic tools in ML. A persistent gap between the train and validation curves indicates overfitting (add regularization or more data); curves that converge at a low score indicate underfitting (use a more complex model or better features).
Learning Curves for Bias-Variance Diagnosis
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

cancer = load_breast_cancer()
X, y = cancer.data, cancer.target

models_to_diagnose = {
    "Logistic Regression\n(may underfit)": Pipeline([
        ("sc", StandardScaler()),
        ("m", LogisticRegression(C=0.001, max_iter=1000)),
    ]),
    "Decision Tree (depth=20)\n(overfits)": Pipeline([
        ("m", DecisionTreeClassifier(max_depth=20, random_state=42)),
    ]),
    "Random Forest\n(well-tuned)": Pipeline([
        ("m", RandomForestClassifier(n_estimators=100, random_state=42)),
    ]),
}

fig, axes = plt.subplots(1, 3, figsize=(18, 5))
train_sizes = np.linspace(0.1, 1.0, 20)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for ax, (name, model) in zip(axes, models_to_diagnose.items()):
    train_sz, train_sc, val_sc = learning_curve(
        model, X, y,
        train_sizes=train_sizes,
        cv=cv,
        scoring="accuracy",
        n_jobs=-1,
    )
    # Plot mean +/- 1 std
    ax.plot(train_sz, train_sc.mean(axis=1), "b-o", label="Train", markersize=4, linewidth=2)
    ax.plot(train_sz, val_sc.mean(axis=1), "r-o", label="Validation", markersize=4, linewidth=2)
    ax.fill_between(train_sz, train_sc.mean(1) - train_sc.std(1), train_sc.mean(1) + train_sc.std(1), alpha=0.15, color="blue")
    ax.fill_between(train_sz, val_sc.mean(1) - val_sc.std(1), val_sc.mean(1) + val_sc.std(1), alpha=0.15, color="red")
    ax.set_xlabel("Training Set Size")
    ax.set_ylabel("Accuracy")
    ax.set_title(name)
    ax.legend(loc="lower right")
    ax.set_ylim(0.7, 1.01)
    # Annotate the train-validation gap at the largest training size
    final_gap = train_sc.mean(1)[-1] - val_sc.mean(1)[-1]
    diagnosis = "OVERFITTING" if final_gap > 0.05 else ("UNDERFITTING" if val_sc.mean(1)[-1] < 0.85 else "GOOD")
    ax.text(0.05, 0.05, f"Gap={final_gap:.3f}\n{diagnosis}", transform=ax.transAxes, fontsize=9,
            bbox=dict(boxstyle="round", facecolor="wheat", alpha=0.5))

plt.suptitle("Learning Curves: Train (blue) vs Validation (red) Accuracy", fontsize=14)
plt.tight_layout()
plt.savefig("learning_curves.png", dpi=100, bbox_inches="tight")
plt.show()
# WHAT EACH PATTERN MEANS
patterns = {
    "Both curves low, converging": "UNDERFITTING -> use more complex model or better features",
    "Train high, val low, large gap": "OVERFITTING -> add regularization, reduce complexity, add data",
    "Both curves converging at high score": "GOOD FIT -> more data only helps if val score is still not high enough",
    "Validation still rising at max size": "ADD MORE DATA -> but expect diminishing returns; each further gain may need far more data",
}
print("Learning curve pattern diagnosis:")
for pattern, action in patterns.items():
    print(f"  {pattern}: {action}")
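Once a curve flags overfitting, the fix can be verified the same way. Below is a minimal sketch, reusing X, y, and cv from the listing above, that re-runs learning_curve on a shallower tree and checks whether the gap shrinks; max_depth=3 is an illustrative value, not a tuned one.

# Re-check the overfitting tree after capping its depth (reuses X, y, cv)
regularized = DecisionTreeClassifier(max_depth=3, random_state=42)
_, tr_sc, va_sc = learning_curve(regularized, X, y, cv=cv, scoring="accuracy", n_jobs=-1)
gap = tr_sc.mean(axis=1)[-1] - va_sc.mean(axis=1)[-1]
print(f"depth=3 train-val gap: {gap:.3f}")  # expect a smaller gap than with depth=20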
Tip
Practice learning curves for diagnosing overfitting and underfitting in small, isolated examples before integrating them into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Underfitting = too simple. Overfitting = memorized training data. Balance with cross-validation.
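To see that balance concretely, sweep model complexity and compare training accuracy with cross-validated accuracy. A minimal sketch, reusing X and y from above; the depth grid is an arbitrary illustration:

from sklearn.model_selection import cross_validate

for depth in [1, 3, 20]:  # too simple -> balanced -> memorizing
    res = cross_validate(
        DecisionTreeClassifier(max_depth=depth, random_state=42),
        X, y, cv=5, scoring="accuracy", return_train_score=True,
    )
    print(f"depth={depth:2d}  train={res['train_score'].mean():.3f}  cv={res['test_score'].mean():.3f}")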
Practice Task
(1) Write a working example of learning curves for diagnosing overfitting and underfitting from scratch, without looking at notes. (2) Modify it to handle an edge case (empty input, null values, or an error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with learning curves is skipping edge-case testing: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready ML code.
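As a minimal sketch of such validation, the helper below is hypothetical (not a scikit-learn API); it shows the kind of pre-flight checks worth running before calling learning_curve with a StratifiedKFold splitter.

import numpy as np

def check_learning_curve_inputs(X, y, n_splits=5):
    """Hypothetical pre-flight checks before calling learning_curve."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    if X.size == 0 or y.size == 0:
        raise ValueError("X and y must be non-empty")
    if len(X) != len(y):
        raise ValueError(f"length mismatch: {len(X)} samples vs {len(y)} labels")
    if np.isnan(X).any():
        raise ValueError("X contains NaN; impute or drop before fitting")
    # StratifiedKFold needs at least n_splits samples of every class
    _, counts = np.unique(y, return_counts=True)
    if counts.min() < n_splits:
        raise ValueError(f"rarest class has {counts.min()} samples; need >= {n_splits}")
    return X, y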