Predict continuous values with 11 regression models — from linear baselines to gradient boosting
11: Algorithms Available
R² / MAE: Metric Suite
Residuals: Diagnostic Plots
CSV / XLSX: Upload Your Data
1. Upload a CSV or Excel file with features and a continuous target column.
2. Choose from 11 regressors and configure parameters like regularization, tree depth, and more.
3. Get R², MAE, RMSE, residual plots, feature importance, and predicted-vs-actual charts.
Choose an algorithm based on your data characteristics and accuracy needs
Ordinary Least Squares — the foundational regression algorithm. Fits a linear relationship between features and target by minimizing squared residuals.
Key Features
Best for: Baseline, linear relationships
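As a concrete illustration of the OLS baseline, here is a minimal scikit-learn sketch; the toy data (slope 3, intercept 2) is made up for the example and is not part of the app:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: y = 3x + 2 plus a little noise (illustrative values only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 * X.ravel() + 2 + rng.normal(scale=0.1, size=50)

# OLS minimizes the sum of squared residuals
model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)  # close to 3 and 2
```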
Ridge regression adds an L2 penalty to OLS to shrink coefficients and reduce overfitting. It keeps all features but damps their influence, which is ideal when features are correlated.
Key Features
Best for: Many correlated features
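The shrinkage effect can be seen on deliberately collinear toy data (a sketch with an illustrative `alpha`, not the app's defaults); the ridge coefficient vector is always no larger in L2 norm than the OLS one:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
x = rng.normal(size=100)
# Two nearly identical (collinear) features
X = np.column_stack([x, x + rng.normal(scale=0.01, size=100)])
y = x + rng.normal(scale=0.1, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
# The L2 penalty pulls the coefficients toward zero and splits the
# weight roughly evenly across the two correlated features
print("OLS:  ", ols.coef_)
print("Ridge:", ridge.coef_)
```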
Lasso applies an L1 penalty that can zero out coefficients entirely, performing automatic feature selection. It produces sparse, interpretable models.
Key Features
Best for: High-dimensional data, feature selection
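The automatic feature selection is easy to demonstrate on synthetic data where only two of ten features matter (the data and `alpha` below are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
# Only features 0 and 3 actually drive the target
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_)  # irrelevant coefficients are driven exactly to zero
```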
Elastic Net combines the Lasso and Ridge penalties, handling correlated features while still performing feature selection.
Key Features
Best for: Correlated features with sparsity
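In scikit-learn the blend is controlled by `l1_ratio`; a sketch on toy data with one duplicated feature (values are illustrative):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=200)  # feature 1 nearly duplicates feature 0
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# l1_ratio blends the penalties: 1.0 = pure Lasso, 0.0 = pure Ridge
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(enet.coef_)  # weight shared across the correlated pair, rest sparse
```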
Decision trees predict continuous values by learning decision rules from the features. They produce interpretable tree structures but can overfit without pruning.
Key Features
Best for: When interpretability is critical
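A tiny sketch showing both the interpretability (printable rules) and depth limiting as a pruning control; the six data points are made up:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Two clusters of x values with clearly different target levels
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)  # max_depth curbs overfitting
print(export_text(tree, feature_names=["x"]))  # human-readable decision rules
print(tree.predict([[2.5]]))  # falls in the low cluster, predicts ~1
```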
Random forests average the predictions of many decorrelated decision trees, and are robust to outliers and overfitting thanks to bagging and feature randomization.
Key Features
Best for: General-purpose regression
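A minimal sketch on a synthetic non-linear target, also showing the feature-importance output the app exposes (data and `n_estimators` are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R²:", rf.score(X_te, y_te))
print("importances:", rf.feature_importances_)
```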
Gradient boosting builds trees sequentially, each one correcting the residual errors of the last. It achieves high accuracy when the learning rate and tree depth are tuned carefully.
Key Features
Best for: Structured/tabular data
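The learning-rate/tree-count trade-off mentioned above can be sketched with scikit-learn (synthetic data, illustrative hyperparameters):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# A smaller learning rate typically needs more trees: the classic trade-off
gbr = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=0
).fit(X_tr, y_tr)
print("held-out R²:", gbr.score(X_te, y_te))
```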
XGBoost is industry-standard gradient boosting with built-in regularization, parallel processing, and missing-value handling; it is a top performer on tabular data.
Key Features
Best for: Production ML, competitions
Support Vector Regression (SVR) applies SVM principles to regression by fitting the data within an epsilon-insensitive tube. With kernel functions it can model complex non-linear relationships.
Key Features
Best for: Non-linear, medium datasets
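A minimal RBF-kernel sketch on a synthetic sine curve; scaling first is standard practice for SVR, and `C`/`epsilon` below are illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# epsilon sets the tube width: errors inside it are not penalized
svr = make_pipeline(
    StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1)
).fit(X, y)
print("R² on training data:", svr.score(X, y))
```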
k-Nearest Neighbors predicts by averaging the target values of the k nearest training points. It is simple, non-parametric, and effective for locally smooth functions.
Key Features
Best for: Locally smooth functions, small data
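The averaging is easy to verify by hand on a tiny made-up dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)
# Nearest two neighbors of 1.5 are x=1 and x=2, so the
# prediction is the mean of their targets: (1 + 2) / 2 = 1.5
print(knn.predict([[1.5]]))
```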
Start with Linear Regression for a baseline, then try Random Forest or XGBoost for better accuracy.
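That baseline-first workflow can be sketched in a few lines; the synthetic target below is deliberately non-linear so the tree ensemble beats the linear model (all values illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.uniform(-2, 2, size=(400, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=400)  # mildly non-linear

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lin = LinearRegression().fit(X_tr, y_tr)          # step 1: cheap baseline
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)  # step 2: stronger model
print("linear MAE:", mean_absolute_error(y_te, lin.predict(X_te)))
print("forest MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
```

Comparing both on the same held-out split makes the accuracy gain (or lack of one) explicit before reaching for heavier models.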