The LLM Unlearning repository is an open-source project dedicated to unlearning in Large Language Models (LLMs). It addresses data-privacy and ethical-AI concerns by exploring and implementing unlearning techniques that allow a trained model to forget unwanted or sensitive data, helping models comply with privacy requirements.
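As a rough illustration of one family of techniques listed in the topics below (approximate unlearning), the sketch here performs gradient *ascent* on a forget set so the model's fit to those examples degrades. It uses a toy logistic-regression model in NumPy and is an assumption-laden sketch, not the repository's actual implementation.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of the logistic loss with respect to weights w.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def loss(w, X, y):
    # Mean logistic loss, with a small epsilon for numerical safety.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def train(X, y, steps=500, lr=0.5):
    # Ordinary training: gradient descent on the full dataset.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

def unlearn(w, X_forget, y_forget, steps=50, lr=0.1):
    # Approximate unlearning: ascend the loss on the forget set only,
    # pushing the model away from its fit to those examples.
    for _ in range(steps):
        w += lr * grad(w, X_forget, y_forget)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)

w = train(X, y)
X_f, y_f = X[:20], y[:20]          # hypothetical "forget" examples
before = loss(w, X_f, y_f)
w_u = unlearn(w.copy(), X_f, y_f)
after = loss(w_u, X_f, y_f)
# after > before: the model fits the forget set worse after unlearning
```

Exact unlearning, by contrast, would retrain from scratch on the retained data; the ascent-based approach above trades that guarantee for efficiency.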
Topics: privacy-preserving-machine-learning, ai-privacy, trustworthy-ai, llm-unlearning, exact-unlearning, ai-transparency, efficient-unlearning, approximate-unlearning, unlearning-framework, data-unlearning, data-forgetting
Updated Nov 14, 2025 · Python