Neural Network Regularization Techniques Analysis

Problem Statement

Machine learning models often suffer from overfitting, leading to poor performance on unseen data. This project evaluates different regularization techniques to improve model generalization.


Objectives

  • Analyze overfitting in neural networks
  • Compare different regularization techniques
  • Evaluate model performance improvements

Tools Used

  • Python
  • TensorFlow / Keras
  • Scikit-learn

Techniques Applied

  • L1 Regularization
  • L2 Regularization
  • Dropout
  • Batch Normalization
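
The four techniques above can be combined in a single Keras model. The sketch below is illustrative only: layer sizes and penalty strengths are assumptions, not the repository's actual configuration.

```python
# Illustrative sketch (not the repository's actual model): a small Keras
# network combining L1, L2, Dropout, and Batch Normalization.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    # L1 penalty on the weights encourages sparsity
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-4)),
    # Batch normalization standardizes each layer's inputs during training
    layers.BatchNormalization(),
    # Dropout randomly zeroes 30% of activations to reduce co-adaptation
    layers.Dropout(0.3),
    # L2 penalty (weight decay) shrinks weights toward zero
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3)),
    layers.Dense(1),  # single output for regression
])
model.compile(optimizer="adam", loss="mse")
```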

Key Insights

  • Dropout significantly reduces overfitting
  • L2 regularization stabilizes training
  • Proper regularization improves generalization to unseen data
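
As a mechanism sketch for the first insight: dropout works by randomly zeroing activations during training. The snippet below shows the common "inverted" form in plain NumPy, where surviving units are rescaled so inference needs no adjustment (values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(seed=0)
activations = rng.normal(size=(4, 8))
keep_prob = 0.7  # keep 70% of units, drop 30%

# Inverted dropout: zero random units, rescale survivors by 1/keep_prob
# so the expected activation magnitude matches inference (no dropout there).
mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob
```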

Business Impact

  • Helps build reliable ML models for production
  • Reduces risk of inaccurate predictions

Author

Sanman Kadam

About

Comparative study of neural network regularization techniques (L1, L2, Dropout, BatchNorm) applied to regression with outliers and TF-IDF based spam classification.
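
The TF-IDF spam-classification side of the study could be wired up along these lines with scikit-learn; the toy data and parameters below are illustrative assumptions, not the repository's code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative toy corpus; 1 = spam, 0 = ham
texts = ["win free cash now", "meeting moved to noon",
         "claim your prize today", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

# LogisticRegression applies an L2 penalty by default; C is the inverse
# regularization strength (smaller C = stronger regularization).
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(C=1.0))
clf.fit(texts, labels)
```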
