Welcome to an end-to-end machine learning dashboard app for model evaluation and deployment: built for a hackathon, designed for production!
🔹 A powerful and stylish Streamlit dashboard for comparing multiple classification models.
🔹 Real-time model testing with uploaded CSVs.
🔹 Fully tuned pipelines, metrics analysis, and interactive visualizations.
The purpose of this project is to:
- Train and evaluate multiple classification algorithms
- Use cross-validation and hyperparameter tuning for optimization
- Compare models based on metrics like:
  - Accuracy
  - AUC Score
  - F1-Score
  - Precision, Recall, Specificity
- Visualize and interpret results through an interactive Streamlit dashboard
- Enable end-users to upload their own CSV and get predictions from tuned models
- ✅ Logistic Regression
- ✅ Decision Tree Classifier
- ✅ Random Forest Classifier
- ✅ Support Vector Machine
- ✅ XGBoost / LightGBM
- ✅ Hyperparameter Tuning (Grid Search)
- ✅ Feature Importance Charts
- ✅ Dynamic Bar Graphs (Plotly)
- ✅ Glassmorphic Streamlit UI
- ✅ Upload CSV to Test Models Live
- ✅ Auto-Pickle & Save All Models
- ✅ Responsive layout with dark mode and Fira Code font
The dataset is sourced from Kaggle. After selection:
- Null values handled
- Categorical features encoded
- Numeric features scaled
- Train/test split applied with stratification
An example input format is available in `example_input.csv`.
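A minimal sketch of this preprocessing flow, assuming a generic `data.csv` with a `target` column (both names are placeholders, not the actual Kaggle schema):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical file and column names for illustration only.
df = pd.read_csv("data.csv")

# Handle null values: drop rows missing the target, fill numeric gaps with the median.
df = df.dropna(subset=["target"])
df = df.fillna(df.median(numeric_only=True))

# Encode categorical features, then scale everything.
X = pd.get_dummies(df.drop(columns=["target"]))
y = df["target"]
X = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns, index=X.index)

# Stratified train/test split preserves the class distribution.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```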
```bash
git clone https://github.com/yourusername/your-repo-name.git
cd your-repo-name
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt
streamlit run app.py
```

Make sure the `models/` folder exists with pickled files.
- Used `Pipeline()` from `sklearn` for each model
- Feature encoding, scaling, and classification in one step (sketched below)
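A minimal sketch of one such pipeline, using a tiny stand-in dataset and a hypothetical save path (`models/random_forest.pkl`); the real column names and paths live in the repo:

```python
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny stand-in dataset; real training uses the Kaggle data.
df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "region": ["north", "south", "north", "east"],
    "target": [0, 1, 0, 1],
})
X, y = df.drop(columns=["target"]), df["target"]

# Encoding, scaling, and classification as one estimator.
model = Pipeline([
    ("preprocess", ColumnTransformer([
        ("num", StandardScaler(), ["age"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ])),
    ("clf", RandomForestClassifier(random_state=42)),
])
model.fit(X, y)

# Auto-pickle the fitted pipeline (assumes a models/ directory exists),
# so preprocessing and prediction stay bundled together.
with open("models/random_forest.pkl", "wb") as f:
    pickle.dump(model, f)
```

Pickling the whole fitted pipeline, rather than the bare classifier, means uploaded CSVs can be fed in raw without re-applying the preprocessing by hand.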
| Metric | Description |
|---|---|
| Accuracy | % Correct predictions |
| AUC Score | Area under ROC curve |
| F1-Score | Harmonic mean of precision & recall |
| Precision | True Positives / Predicted Positives |
| Recall | True Positives / Actual Positives |
| Specificity | True Negatives / Actual Negatives |
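All of these come from `sklearn.metrics` except specificity, which falls out of the confusion matrix. A sketch with stand-in labels and scores (in the app these come from a tuned pipeline):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Stand-in ground truth, predictions, and positive-class scores.
y_test = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])
y_prob = np.array([0.2, 0.9, 0.4, 0.1, 0.8, 0.6])

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

metrics = {
    "Accuracy": accuracy_score(y_test, y_pred),
    "AUC Score": roc_auc_score(y_test, y_prob),
    "F1-Score": f1_score(y_test, y_pred),
    "Precision": precision_score(y_test, y_pred),  # TP / (TP + FP)
    "Recall": recall_score(y_test, y_pred),        # TP / (TP + FN)
    "Specificity": tn / (tn + fp),                 # TN / (TN + FP)
}
print(metrics)
```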
- `GridSearchCV` for exhaustive tuning
- Best parameters auto-selected for each model (see the sketch below)
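A sketch of the tuning step, assuming `model` is a `Pipeline` with a `clf` step and `X_train`, `y_train` come from the earlier split; the grid values here are placeholders, not the grids actually searched:

```python
from sklearn.model_selection import GridSearchCV

# Hypothetical grid; the "clf__" prefix targets the pipeline's classifier step.
param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 10, 20],
}

search = GridSearchCV(model, param_grid, cv=5, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)
best_model = search.best_estimator_  # best parameters auto-selected
```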
- Theme toggle: Light ✨ / Dark 🌙
- Plotly-based interactive charts (example below)
- Hover effects, rounded corners, and modern Fira Code font
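A minimal sketch of one such Plotly chart rendered in Streamlit (the metric values are placeholders):

```python
import plotly.express as px
import streamlit as st

# Placeholder comparison data for illustration.
fig = px.bar(
    x=["Random Forest", "XGBoost"],
    y=[0.92, 0.91],
    labels={"x": "Model", "y": "Accuracy"},
    title="Model Accuracy Comparison",
)
st.plotly_chart(fig, use_container_width=True)
```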
- Upload a `.csv` file to test any tuned model (see the sketch below)
- Performance chart comparison between Accuracy and AUC
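A sketch of the upload flow, assuming a pickled pipeline at a hypothetical `models/random_forest.pkl`:

```python
import pickle
import pandas as pd
import streamlit as st

uploaded = st.file_uploader("Upload a CSV (see example_input.csv)", type="csv")
if uploaded is not None:
    data = pd.read_csv(uploaded)
    # Hypothetical path; any pickled pipeline from models/ works here.
    with open("models/random_forest.pkl", "rb") as f:
        model = pickle.load(f)
    data["prediction"] = model.predict(data)
    st.dataframe(data)
```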
Easily deploy on Streamlit Cloud:
`https://share.streamlit.io/yourusername/your-repo-name/main/app.py`

You can also deploy via:
- Hugging Face Spaces
- Render.com
- Local containerized environments (Docker)
- Run the app with `streamlit run app.py`
- Upload a CSV following the `example_input.csv` format
- Select any model from the sidebar dropdown
- Visualize predictions, performance, and insights
| Dashboard View | Feature Importances |
|---|---|
| ![]() | ![]() |
| Model | Accuracy | AUC Score | F1 Score | Recall | Specificity |
|---|---|---|---|---|---|
| Random Forest | 0.92 | 0.94 | 0.91 | 0.90 | 0.93 |
| XGBoost | 0.91 | 0.95 | 0.90 | 0.89 | 0.92 |
MIT License © 2025 Debangan Ghosh
Star ⭐ the repo if you liked the project. Contributions, feedback, and forks are always welcome!
Connect with me on LinkedIn or drop an issue if you want to collaborate!