AnonTex is a privacy-first experimental LLM proxy that anonymizes Personally Identifiable Information (PII) before forwarding requests to the OpenAI Completion API. It is designed to be compatible with the /v1/chat/completions endpoint, making it a drop-in proxy with minimal integration effort.
⚠️ Note: This is an experimental project. Use with caution in production environments.
- Acts as a transparent proxy for OpenAI's chat completion endpoint.
- Automatically anonymizes user input using PII detection.
- Redis-backed for entity management and fast caching.
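Conceptually, the proxy's anonymize/restore cycle looks like the simplified sketch below. A plain dict stands in for Redis and a naive regex stands in for the Presidio-based detector; the function names are illustrative and are not AnonTex's actual API.

```python
import re

def anonymize(text: str, store: dict) -> str:
    """Replace detected names with placeholders and remember the mapping."""
    def repl(match: re.Match) -> str:
        placeholder = f"<PERSON_{len(store)}>"
        store[placeholder] = match.group(0)
        return placeholder
    # Toy detector: two capitalized words in a row look like a person's name.
    return re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", repl, text)

def deanonymize(text: str, store: dict) -> str:
    """Restore the original entities in the model's response."""
    for placeholder, original in store.items():
        text = text.replace(placeholder, original)
    return text

store = {}
masked = anonymize("Hello! My name is John Smith", store)
# masked == "Hello! My name is <PERSON_0>"
reply = deanonymize("Nice to meet you, <PERSON_0>!", store)
# reply == "Nice to meet you, John Smith!"
```

The upstream API only ever sees the placeholders; the mapping stays in the proxy's Redis store so the original entities can be restored in the response.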
Install via pip:
```shell
pip install anontex
```

✅ Note: Redis is a required external dependency for caching and PII management. Make sure you have Redis running locally or remotely.
To include optional transformer-based model support, install with the `transformers` extra:

```shell
pip install "anontex[transformers]"
```

Once installed and configured, AnonTex runs a proxy server compatible with OpenAI's Chat Completion API.
```shell
curl --request POST \
  --url http://localhost:8000/v1/chat/completions \
  --header 'Authorization: Bearer YOUR-OPENAI-API-KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello! My name is John Smith" }
    ]
  }'
```

Start the proxy via CLI:
```shell
anontex run
```

Options:

- `--host`: Server host (default: `0.0.0.0`)
- `--port`: Server port (default: `8000`)
- `--config`: Path to configuration file (default: spaCy engine configs)
- `--log-level`: Logging level (default: `info`)
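The same request can be made from Python. The sketch below uses only the standard library and assumes the proxy is running on the default host and port; the `build_payload` and `chat` helpers are illustrative names, not part of AnonTex.

```python
import json
import urllib.request

# Assumes AnonTex is listening on the default host/port shown above.
ANONTEX_URL = "http://localhost:8000/v1/chat/completions"

def build_payload(user_message: str, model: str = "gpt-4o-mini") -> dict:
    """Build a standard Chat Completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def chat(user_message: str, api_key: str) -> dict:
    """POST the request through the AnonTex proxy and return the parsed JSON."""
    request = urllib.request.Request(
        ANONTEX_URL,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# With the proxy running:
# reply = chat("Hello! My name is John Smith", api_key="YOUR-OPENAI-API-KEY")
# print(reply["choices"][0]["message"]["content"])
```

Because the request body is the standard Chat Completions format, existing OpenAI clients can also be pointed at the proxy by overriding their base URL.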
You can pass settings via a YAML config file. See the documentation below to customize the config file.
This project uses the `presidio-analyzer` Python package as its entity detector. You can use the default config without specifying a custom file, or point to any `presidio-analyzer`-supported config file.
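A minimal NLP engine configuration in the format `presidio-analyzer` accepts looks like this (the file name is illustrative; model choice is an assumption):

```yaml
# analyzer-config.yaml -- minimal presidio-analyzer NLP engine config (spaCy backend)
nlp_engine_name: spacy
models:
  - lang_code: en
    model_name: en_core_web_lg
```

Such a file would then be passed via `anontex run --config analyzer-config.yaml`.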
Additional configuration can be provided via environment variables in a `.env` file. If no `.env` file is set, default values will be used. See the documentation below to customize the `.env` file.
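A `.env` file might look like the fragment below. The variable names here are purely illustrative, check the project's documentation for the names AnonTex actually reads:

```env
# Illustrative only -- verify variable names against the AnonTex docs.
OPENAI_API_KEY=sk-your-key-here
REDIS_URL=redis://localhost:6379/0
```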
You can deploy AnonTex with Docker using Docker Compose.
```shell
git clone https://github.com/ChamathKB/AnonTex
cd AnonTex
docker compose up -d
```

- ❌ No support for multi-turn PII tracking (PII memory is per-message only).
- 🔗 Only supports OpenAI API compatible endpoints.
- 🌐 Limited language support (primarily English).
- 📈 Planned support for:
- Multi-turn entity memory
- Custom anonymization rules
- Model switching and vendor abstraction
- Analytics & tracing integration
Pull requests are welcome! For major changes, please open an issue first to discuss what you’d like to change.
This project is licensed under the Apache 2.0 License.