Multi-linear regression extends simple linear regression to handle multiple input features simultaneously. Instead of predicting from just one variable, we can use several features to make more accurate predictions.
In civil engineering, this is particularly valuable for projects like predicting housing costs from multiple property characteristics, estimating structural loads from various design parameters, or forecasting traffic flow from multiple environmental factors.
• Click transformation boxes below each feature to activate/deactivate them (blue = active). Each feature can have multiple transformations active simultaneously.
• By default, the linear transformation (x) is selected for all features. Try adding polynomial (x², x³) or inverse (1/x, 1/x²) transformations to improve predictions.
• The model automatically retrains when you toggle any transformation. Watch the metrics (MAE, MSE, R²) to see how different transformations affect performance.
• The scatter plot shows actual vs predicted prices—points closer to the diagonal line indicate better predictions.
Can you find a feature engineering strategy that maximizes R² beyond the linear baseline?
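Toggling a transformation amounts to adding an extra derived column to the design matrix for that feature. A minimal sketch of that expansion (the transformation names `linear`, `poly2`, `poly3`, `inv`, `inv2` are illustrative labels, not the demo's actual identifiers):

```python
import numpy as np

def transform_feature(x, transforms):
    """Expand one feature column into the selected transformation columns.

    Names mirror the toggles above: "linear" (x), "poly2"/"poly3"
    (x^2, x^3) and "inv"/"inv2" (1/x, 1/x^2). Inverse transforms
    assume the feature is nonzero.
    """
    columns = {
        "linear": x,
        "poly2": x ** 2,
        "poly3": x ** 3,
        "inv": 1.0 / x,
        "inv2": 1.0 / x ** 2,
    }
    return np.column_stack([columns[name] for name in transforms])

# Example: one positive-valued feature with linear + squared terms active
x = np.array([1.0, 2.0, 3.0, 4.0])
X = transform_feature(x, ["linear", "poly2"])
print(X.shape)  # one row per sample, one column per active transformation
```

The model is still linear in its coefficients; only the inputs are transformed, which is why the same least-squares machinery can fit curved relationships.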
Understanding the Results:
• R² values: 0.0–0.3 (poor), 0.3–0.7 (moderate), 0.7+ (good fit); negative values mean the model predicts worse than the mean
• MSE (Mean Squared Error): the average of squared prediction errors; lower values indicate a better fit, and large errors are penalized heavily
• MAE (Mean Absolute Error): the average absolute prediction error; more intuitive than MSE because it is in the same units as the target
• Feature importance: Notice which features contribute most to prediction accuracy
• Overfitting vs Underfitting: too few features may underfit, but enabling every feature doesn't always improve performance; extra terms can end up fitting noise
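The metrics above can be reproduced outside the demo with scikit-learn. A hedged sketch on synthetic "housing" data (the feature name `area` and the coefficients are invented for illustration), comparing the linear baseline against a model with an added squared term:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Synthetic data: price depends mildly nonlinearly on one feature
area = rng.uniform(50, 200, size=200)
price = 300 + 2.5 * area + 0.01 * area ** 2 + rng.normal(0, 20, size=200)

# Linear baseline vs. linear + squared term
candidates = {
    "linear": area.reshape(-1, 1),
    "linear+poly2": np.column_stack([area, area ** 2]),
}

results = {}
for name, X in candidates.items():
    model = LinearRegression().fit(X, price)
    pred = model.predict(X)
    results[name] = r2_score(price, pred)
    print(f"{name}: MAE={mean_absolute_error(price, pred):.1f} "
          f"MSE={mean_squared_error(price, pred):.1f} "
          f"R2={results[name]:.3f}")
```

Because the polynomial model nests the linear one, its training R² can only match or exceed the baseline; to detect overfitting you would compare the metrics on held-out data instead.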
L1 Regularization (Lasso Regression)
Lasso adds an L1 penalty term to the loss function that encourages coefficient sparsity. As regularization strength increases, more coefficients are driven to exactly zero, automatically performing feature selection.
• All features active: this mode starts with every feature transformation enabled, so you can watch regularization prune the unimportant ones
• Adjust λ (lambda): Use the strength buttons to control regularization intensity
• Coefficient visualization: Feature panel shows coefficient values through color and saturation
- Green: Positive coefficients (saturation = magnitude)
- Red: Negative coefficients (saturation = magnitude)
- Grey: Near-zero coefficients (driven to zero by regularization)
Observe how higher λ values create sparser models by eliminating less important features!
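The sparsity effect is easy to verify with scikit-learn's `Lasso` (where λ is the `alpha` parameter). A minimal sketch, assuming synthetic data in which only 2 of 8 features actually drive the target:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# 8 features, but only the first two carry signal
X = rng.normal(size=(300, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.5, size=300)
X = StandardScaler().fit_transform(X)  # standardize so the L1 penalty is fair

nonzeros = {}
for lam in [0.01, 0.1, 1.0]:
    model = Lasso(alpha=lam).fit(X, y)
    nonzeros[lam] = int(np.sum(model.coef_ != 0))
    print(f"lambda={lam}: {nonzeros[lam]} nonzero coefficients")
```

As λ grows, the L1 penalty drives the noise coefficients to exactly zero while the two informative ones survive (shrunken), which is the automatic feature selection described above. Standardizing first matters: otherwise the penalty punishes features on large scales more than small ones.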