The bias-variance tradeoff is fundamental to understanding model complexity in machine learning. When fitting a polynomial to data, we face a critical choice: simple models (low degree) may underfit, while complex models (high degree) may overfit.

This demo uses synthetic data (12 training points, 18 test points) with a complex underlying pattern combining polynomial and trigonometric components. You'll clearly see: degree 1 severely underfits (MSE ~8,500), degree 4 finds the sweet spot (MSE ~1,200), and degree 10 overfits dramatically (train MSE = 17, test MSE = 5,096!).
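The sketch below roughly reproduces this setup in Python with NumPy. The demo's exact generating function, noise level, and x-range are not published, so `true_fn`, the noise scale, and the sampling interval here are assumptions; degrees 1, 4, and 10 mirror the three regimes described above, although the printed MSE values will differ from the demo's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-in for the demo's hidden pattern: a cubic plus a sine term.
def true_fn(x):
    return 0.5 * x**3 - 2.0 * x**2 + 30.0 * np.sin(2.0 * x)

x_train = rng.uniform(-3, 3, 12)                        # 12 training points
y_train = true_fn(x_train) + rng.normal(scale=10.0, size=12)
x_test = rng.uniform(-3, 3, 18)                         # 18 test points
y_test = true_fn(x_test) + rng.normal(scale=10.0, size=18)

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

for degree in (1, 4, 10):                               # underfit / sweet spot / overfit
    coeffs = np.polyfit(x_train, y_train, degree)       # least-squares polynomial fit
    train = mse(y_train, np.polyval(coeffs, x_train))
    test = mse(y_test, np.polyval(coeffs, x_test))
    print(f"degree {degree:2d}: train MSE = {train:8.1f}, test MSE = {test:8.1f}")
```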
• Adjust polynomial degree slider (1-10) to see how complexity affects train vs test error
• Check "Show Squared Errors" to visualize MSE: the area of each square equals that point's squared prediction error (see the short MSE sketch after this list)
• Click "Toggle Test Data" to show/hide test points
• Watch the error chart: Test MSE drops sharply, then explodes at degree 8+
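As referenced in the "Show Squared Errors" bullet, MSE is simply the average of the squared residuals, i.e. the mean area of those squares. A minimal sketch of that computation (the helper name `mse` and the toy numbers are illustrative, not part of the demo):

```python
import numpy as np

# Mean squared error: the average area of the squares drawn by the overlay,
# one square per residual (predicted minus observed value).
def mse(y_true, y_pred):
    residuals = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(residuals ** 2))

print(mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # 0.1666... = (0.25 + 0 + 0.25) / 3
```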

Why does degree 4 minimize the test error? Why does the training error drop toward zero while the test error explodes?
• Degree 1: straight line, severe underfitting (MSE ~8,500)
• Degrees 2-3: still underfitting (MSE ~2,900 → ~260)
• Degree 4: ✓ SWEET SPOT (test MSE ~1,200), the best generalization
• Degrees 5-8: slight overfitting (test MSE ~1,200-1,400)
• Degrees 9-10: clear overfitting: train MSE drops to 17 while test MSE rises to ~5,000 (with only 12 training points, a degree-10 polynomial's 11 coefficients can nearly interpolate the data, so it fits the noise)
• Dataset split: 12 training and 18 test points give a realistic bias-variance progression (a degree sweep that reproduces this error curve is sketched below)
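To reproduce the error chart and the degree-by-degree breakdown above, sweep the slider's full range. A self-contained sketch using the same assumed data generator as before (NumPy plus Matplotlib for the plot); the shape of the curves, not their exact values, is what should match the demo:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Same assumed generator as in the earlier sketch (the real one isn't published).
def true_fn(x):
    return 0.5 * x**3 - 2.0 * x**2 + 30.0 * np.sin(2.0 * x)

x_tr = rng.uniform(-3, 3, 12)
y_tr = true_fn(x_tr) + rng.normal(scale=10.0, size=12)
x_te = rng.uniform(-3, 3, 18)
y_te = true_fn(x_te) + rng.normal(scale=10.0, size=18)

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

degrees = list(range(1, 11))
train_mse, test_mse = [], []
for d in degrees:
    coeffs = np.polyfit(x_tr, y_tr, d)                  # fit on the 12 training points
    train_mse.append(mse(y_tr, np.polyval(coeffs, x_tr)))
    test_mse.append(mse(y_te, np.polyval(coeffs, x_te)))

# Log scale makes both the early drop and the late explosion visible at once.
plt.semilogy(degrees, train_mse, "o-", label="train MSE")
plt.semilogy(degrees, test_mse, "s-", label="test MSE")
plt.xlabel("polynomial degree")
plt.ylabel("MSE (log scale)")
plt.legend()
plt.show()
```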
[Demo default view: degree 1, fitted line y = 27.95 + 20.17x, Training Set MSE = 8,540.6; toggles for 📐 Squared Errors and 🔬 Test Data]

Developed by Kevin Yu & Panagiotis Angeloudis