Train, evaluate, and compare 9 supervised classification models on your data
9
Algorithms Available
Train/Test
Stratified Splitting
15+
Metrics & Charts
CSV / XLSX
Upload Your Data
Upload a CSV or Excel file with features and a target column. Non-numeric columns are auto-encoded.
Choose from 9 classifiers and configure parameters like train/test split, regularization, and more.
Get a confusion matrix, ROC curves, feature importances, precision/recall, and many more metrics.
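The three steps above can be sketched in plain scikit-learn and pandas. This is an illustrative stand-in, not the app's exact internals; the toy columns (`age`, `plan`, `target`) and the choice of one-hot encoding are assumptions.

```python
# Hypothetical end-to-end sketch: auto-encode non-numeric columns, make a
# stratified train/test split, train a model, and print common metrics.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# Toy stand-in for an uploaded CSV: one numeric and one categorical feature.
df = pd.DataFrame({
    "age":    [22, 35, 47, 52, 28, 61, 33, 45, 39, 58] * 10,
    "plan":   ["basic", "pro", "pro", "basic", "basic",
               "pro", "basic", "pro", "basic", "pro"] * 10,
    "target": [0, 1, 1, 0, 0, 1, 0, 1, 0, 1] * 10,
})

X = pd.get_dummies(df.drop(columns="target"))  # encode non-numeric columns
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)  # stratified split

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))
print(classification_report(y_test, clf.predict(X_test)))
```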
Choose an algorithm based on your data characteristics and interpretability needs
A fundamental classification algorithm that models the probability of class membership using a logistic function. Excellent interpretability through coefficients and calibrated probabilities.
Key Features
Best for: Binary/multiclass with interpretable results
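A minimal sketch of those two selling points, coefficients and probabilities, on synthetic data; the feature count and `C` value here are illustrative, not the app's defaults.

```python
# Hypothetical sketch: logistic regression exposes per-feature coefficients
# (effects on the log-odds) and calibrated class probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_train, y_train)
print("coefficients:", clf.coef_[0])                 # one weight per feature
print("P(class=1):", clf.predict_proba(X_test[:3])[:, 1])
```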
Support Vector Machine finds the optimal hyperplane that maximizes the margin between classes. With kernel tricks, it can handle non-linear decision boundaries.
Key Features
Best for: High-dimensional data, text classification
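The kernel trick in practice might look like the sketch below: an RBF-kernel SVM separating a non-linear "two moons" problem. The pipeline and parameters are illustrative; scaling is included because SVM margins are distance-based.

```python
# Hypothetical sketch: an RBF-kernel SVM learns a non-linear boundary
# that a plain linear separator could not.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```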
Builds a tree of decision rules that split data by features. Highly interpretable — you can trace any prediction back to a series of simple if-then conditions.
Key Features
Best for: When interpretability is critical
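That traceability can be made literal: a shallow tree's if-then rules can be printed verbatim. A sketch on the classic Iris dataset (the `max_depth=3` cap is illustrative):

```python
# Hypothetical sketch: print a decision tree's rules as plain if-then text,
# so any prediction can be traced by hand.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)
print(export_text(clf, feature_names=list(iris.feature_names)))
```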
Combines many decision trees trained on random subsets of data and features. Reduces overfitting through bagging and provides robust predictions.
Key Features
Best for: General-purpose, robust classification
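Bagging and the resulting per-feature importances can be sketched as follows; the dataset shape and tree count are illustrative.

```python
# Hypothetical sketch: a bagged ensemble of trees, with importances showing
# which features the forest actually relied on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_.round(3))
```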
Builds trees sequentially, with each new tree correcting errors from previous ones. Achieves high accuracy through gradient-based optimization.
Key Features
Best for: Structured/tabular data competitions
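The sequential error-correcting idea, sketched with scikit-learn's gradient boosting; `n_estimators`, `learning_rate`, and `max_depth` here are illustrative defaults, not the app's settings.

```python
# Hypothetical sketch: trees are added one at a time, each fit against the
# current ensemble's errors; learning_rate shrinks each tree's contribution.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```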
An optimized implementation of gradient boosting with regularization, parallel processing, and built-in handling of missing values. Industry standard for tabular data.
Key Features
Best for: Production ML, Kaggle competitions
Classifies new data points based on the majority vote of their k nearest neighbors. Non-parametric and intuitive — no training phase, just distance computation.
Key Features
Best for: Small to medium datasets, recommendation systems
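A sketch of the majority-vote idea; the scaler is included because k-NN's distances are only meaningful when features share a scale, and `n_neighbors=5` is an illustrative choice.

```python
# Hypothetical sketch: "fitting" k-NN just stores the (scaled) training
# points; prediction is a majority vote among the 5 nearest neighbors.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```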
Multi-Layer Perceptron — a feedforward neural network with configurable hidden layers. Learns complex non-linear decision boundaries through backpropagation.
Key Features
Best for: Complex patterns, image-like features
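A configurable-hidden-layer MLP might be sketched as below; the `(32, 16)` layer sizes are illustrative, and scaling is included because backpropagation converges poorly on unscaled inputs.

```python
# Hypothetical sketch: a two-hidden-layer MLP learning a non-linear
# boundary via backpropagation.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```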
Start with Logistic Regression for a quick interpretable baseline, then try Random Forest or XGBoost for higher accuracy.
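That baseline-then-upgrade progression can be sketched as a simple comparison loop on one shared split (synthetic data, illustrative defaults):

```python
# Hypothetical sketch: fit an interpretable baseline and a stronger
# ensemble on the same split, then compare test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

for name, clf in [
    ("logistic regression (baseline)", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(random_state=0)),
]:
    clf.fit(X_train, y_train)
    print(f"{name}: {clf.score(X_test, y_test):.3f}")
```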