Cross-Validation — Reliable Performance Estimation
A single train-test split gives unreliable performance estimates — you might get a lucky or unlucky split. K-fold cross-validation uses all your data for both training and evaluation: split into K folds, train on K-1 of them, evaluate on the held-out fold, repeat K times, and average the scores. Stratified K-fold maintains class proportions in each fold. Time-series data requires temporal splits — never shuffle time-series.
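Before reaching for scikit-learn, the mechanics are worth seeing once from scratch. A minimal sketch (the helper name `kfold_indices` is ours, not a library function): shuffle the indices, split into K folds, and use each fold once as the validation set.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once
    folds = np.array_split(idx, k)            # k roughly equal folds
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

# Every sample lands in exactly one validation fold
all_val = np.concatenate([v for _, v in kfold_indices(100, 5)])
print(sorted(all_val.tolist()) == list(range(100)))  # True
```

This is exactly what `KFold(n_splits=k, shuffle=True)` does internally; the library version adds input validation and stratified variants.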
K-Fold, Stratified, and TimeSeriesSplit
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import (KFold, StratifiedKFold, cross_val_score,
                                     cross_validate, TimeSeriesSplit)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, roc_auc_score, f1_score
cancer = load_breast_cancer()
X, y = cancer.data, cancer.target
model = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)),
])
# SINGLE SPLIT vs CROSS-VALIDATION VARIABILITY
from sklearn.model_selection import train_test_split
single_split_scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model.fit(X_tr, y_tr)
    single_split_scores.append(model.score(X_te, y_te))
cv_scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"Single split (10 different seeds): mean={np.mean(single_split_scores):.4f}, std={np.std(single_split_scores):.4f}, range=[{min(single_split_scores):.4f}, {max(single_split_scores):.4f}]")
print(f"10-fold CV: mean={cv_scores.mean():.4f}, std={cv_scores.std():.4f}, range=[{cv_scores.min():.4f}, {cv_scores.max():.4f}]")
print(" CV gives a more stable estimate using ALL data")
# K-FOLD VARIANTS
print("\nCross-validation strategies:")
cv_strategies = {
    "KFold(k=5)": KFold(n_splits=5, shuffle=True, random_state=42),
    "StratifiedKFold(k=5)": StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    "KFold(k=10)": KFold(n_splits=10, shuffle=True, random_state=42),
    "StratifiedKFold(k=10)": StratifiedKFold(n_splits=10, shuffle=True, random_state=42),
}
for name, cv in cv_strategies.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"  {name:28s}: AUC={scores.mean():.4f} +/-{scores.std():.4f}")
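When fold-to-fold variance is a concern, the strategies above can also be repeated with different shuffles. A hedged sketch using `RepeatedStratifiedKFold` (model size reduced here only to keep the run short):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=50, random_state=42, n_jobs=-1)

# 5 folds x 3 repeats = 15 scores, each repeat using a different shuffle
rcv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=42)
scores = cross_val_score(clf, X, y, cv=rcv, scoring="roc_auc")
print(f"Repeated 5x3 CV: AUC={scores.mean():.4f} +/-{scores.std():.4f} "
      f"({len(scores)} scores)")
```

Averaging over repeats smooths out the luck of any single fold assignment, at the cost of 3x the compute.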
# MULTIPLE METRICS AT ONCE
scorers = {
    "accuracy": "accuracy",
    # use the built-in "roc_auc" scorer: it scores on predicted probabilities,
    # whereas make_scorer(roc_auc_score) would receive hard class predictions
    "roc_auc": "roc_auc",
    "f1": make_scorer(f1_score),
}
cv_results = cross_validate(model, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=42),
                            scoring=scorers, return_train_score=True)
print("\nMulti-metric cross-validation:")
for metric in ["accuracy", "roc_auc", "f1"]:
    train = cv_results[f"train_{metric}"].mean()
    test = cv_results[f"test_{metric}"].mean()
    print(f"  {metric:12s}: train={train:.4f} | val={test:.4f} | gap={train - test:.4f}")
# TIME-SERIES SPLIT -- never shuffle temporal data!
print("\nTimeSeriesSplit (for time-series data):")
tscv = TimeSeriesSplit(n_splits=5)
# note: the cancer data is not actually temporal -- this only demonstrates the API
ts_scores = cross_val_score(model, X, y, cv=tscv, scoring="accuracy")
for i, score in enumerate(ts_scores, 1):
    print(f"  Fold {i}: {score:.4f} (always trains on past, tests on future)")
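It may help to see the expanding-window behavior of `TimeSeriesSplit` directly, on a toy index array where the time ordering is obvious:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X_ts = np.arange(20).reshape(-1, 1)  # 20 "time-ordered" samples
tscv = TimeSeriesSplit(n_splits=4)
for i, (train_idx, test_idx) in enumerate(tscv.split(X_ts), 1):
    print(f"Fold {i}: train=0..{train_idx.max()}  "
          f"test={test_idx.min()}..{test_idx.max()}")
# Each fold's training window grows, and every test index
# comes strictly after every training index.
```

Unlike K-fold, early samples are used for training far more often than late ones; that asymmetry is the price of never peeking into the future.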
Tip
Practice cross-validation in small, isolated examples before integrating it into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone. Prefer stratified K-fold for classification, and watch for data leakage: fit preprocessing inside each fold (e.g. via a Pipeline), never on the full dataset.
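A quick sketch of what "fit preprocessing inside each fold" means in practice: the leaky version below standardizes once over all data, so every training fold's scaler has already seen the validation fold's statistics; the pipeline version refits the scaler per fold.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Leaky: scaler fit on ALL rows before CV ever splits them
X_leaky = StandardScaler().fit_transform(X)
leaky = cross_val_score(LogisticRegression(max_iter=5000), X_leaky, y, cv=5)

# Correct: scaler refit on each training fold inside the pipeline
pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=5000))])
clean = cross_val_score(pipe, X, y, cv=5)

print(f"leaky={leaky.mean():.4f}  pipeline={clean.mean():.4f}")
```

For mere standardization the numeric gap is often tiny, but the leaky estimate is optimistic in principle, and for target-dependent steps (feature selection, encoding) the gap can be large; the pipeline is the safe habit either way.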
Practice Task — (1) Write a working cross-validation example from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with cross-validation is skipping edge case testing: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready ML code.