TensorBoard & W&B — Experiment Tracking
Logging loss curves and comparing experiments is non-negotiable in serious AI development. TensorBoard comes with PyTorch. Weights & Biases (W&B) is the industry standard — it tracks metrics, hyperparameters, gradients, model checkpoints, and lets you compare runs with one line of code.
Experiment Tracking with W&B
# pip install wandb tensorboard
import torch
import wandb
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# WEIGHTS & BIASES (W&B) — industry standard
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Initialize run (creates a new experiment in W&B dashboard)
wandb.init(
    project="ai-course-experiments",
    name="resnet50-flowers-baseline",
    config={
        "architecture": "resnet50",
        "learning_rate": 1e-3,
        "batch_size": 64,
        "n_epochs": 30,
        "n_classes": 5,
        "augmentation": "random_flip_crop",
    },
)
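Once initialized, `wandb.config` exposes the config dict as attributes (e.g. `wandb.config.learning_rate`). If you want the same training code to also run without W&B, a tiny fallback namespace gives identical attribute access. A minimal sketch — the `get_config` helper here is hypothetical, not part of the wandb API:

```python
from types import SimpleNamespace

def get_config(use_wandb: bool, defaults: dict):
    """Return a config object with attribute access.

    When W&B is available, wandb.config already behaves this way;
    otherwise fall back to a plain namespace over the same defaults.
    """
    if use_wandb:
        import wandb
        wandb.init(project="ai-course-experiments", config=defaults)
        return wandb.config
    return SimpleNamespace(**defaults)

# Works offline, with the same attribute-style access as wandb.config
cfg = get_config(False, {"learning_rate": 1e-3, "batch_size": 64})
print(cfg.learning_rate)
```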
# Log metrics every step
def training_loop_with_wandb(model, train_loader, val_loader, n_epochs=30):
    optimizer = torch.optim.AdamW(model.parameters(), lr=wandb.config.learning_rate)
    loss_fn = torch.nn.CrossEntropyLoss()
    # Watch gradients (log gradient histograms automatically) —
    # call this BEFORE training so the hooks are in place
    # wandb.watch(model, log="all", log_freq=100)
    for epoch in range(n_epochs):
        model.train()
        train_loss = 0.0
        for X, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
            # Log every batch
            wandb.log({"train/batch_loss": loss.item()})
        # Log per-epoch metrics
        wandb.log({
            "train/epoch_loss": train_loss / len(train_loader),
            "epoch": epoch,
            "lr": optimizer.param_groups[0]["lr"],
        })
    # Save the weights locally, then upload the file to W&B
    torch.save(model.state_dict(), "best_model.pt")
    wandb.save("best_model.pt")
    wandb.finish()
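The raw `train/batch_loss` curve logged above is noisy; both the W&B and TensorBoard dashboards offer a smoothing slider that applies an exponential moving average before plotting. A minimal sketch of that smoothing, so you can see what the slider is doing:

```python
def smooth(values, weight=0.9):
    """Exponential moving average over a list of scalars.

    Same idea as the smoothing slider in TensorBoard/W&B loss-curve
    plots: higher weight = smoother curve, more lag.
    """
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed

noisy = [1.0, 0.8, 1.1, 0.7, 0.9, 0.5]
print(smooth(noisy, weight=0.5))  # spikes are damped toward the running average
```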
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# TENSORBOARD — built into PyTorch
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("runs/experiment_1")
for epoch in range(10):
    train_loss = 0.5 - epoch * 0.04   # simulated
    val_loss = 0.6 - epoch * 0.035
    writer.add_scalar("Loss/train", train_loss, epoch)
    writer.add_scalar("Loss/val", val_loss, epoch)
    writer.add_scalar("LR", 1e-3 * (0.9 ** epoch), epoch)
# Add model graph to TensorBoard
import torch.nn as nn
model = nn.Linear(10, 2)
dummy = torch.randn(1, 10)
writer.add_graph(model, dummy)
writer.close()
# Launch TensorBoard:
# tensorboard --logdir runs/
# Then open: http://localhost:6006
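Logging a per-epoch validation loss is most useful when you also act on it: track the best epoch so you know which checkpoint to keep, and stop early when the metric stops improving. A small illustrative sketch (plain Python, independent of either tracking tool — W&B can also record a metric's min/max in the run summary):

```python
def best_epoch(val_losses, patience=3):
    """Return (best_epoch_index, best_loss), stopping early when the
    validation loss hasn't improved for `patience` consecutive epochs."""
    best, best_ep, wait = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_ep, wait = loss, epoch, 0  # new best: reset patience
        else:
            wait += 1
            if wait >= patience:
                break  # no improvement for `patience` epochs — stop
    return best_ep, best

losses = [0.60, 0.52, 0.48, 0.47, 0.49, 0.50, 0.51]
print(best_epoch(losses))  # best epoch is 3; training would stop after epoch 6
```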
print("Experiment tracking setup complete!")
print("W&B dashboard: https://wandb.ai/your-username/ai-course-experiments")
Tip
Practice TensorBoard & W&B experiment tracking in small, isolated examples before integrating it into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Practice Task
Practice Task — (1) Write a working example of TensorBoard & W&B experiment tracking from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake with TensorBoard & W&B experiment tracking is skipping edge case testing — empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready AI code.