The Python SDK for Project David — the open-source, GDPR-compliant successor to the OpenAI Assistants API.
Same primitives. Every model. Your infrastructure.
Project David is a full-scale, containerized LLM orchestration platform built around the same primitives as the OpenAI Assistants API — Assistants, Threads, Messages, Runs, and Tools — but without the lock-in.
- Provider agnostic — Hyperbolic, TogetherAI, Ollama, or any OpenAI-compatible endpoint. Point at any inference provider and the platform normalizes the stream.
- Every model — hosted APIs today, raw local weights tomorrow. Bring your own model.
- Your infrastructure — fully self-hostable, open-source, GDPR-compliant, security-audited.
- Production grade — sandboxed code execution (FireJail), multi-agent delegation, file serving with signed URLs, real-time streaming frontend.
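The primitives named above relate to each other in a simple way: a Thread holds Messages, and a Run executes one Assistant against one Thread. A schematic data-model sketch (illustrative dataclasses, not the SDK's actual classes):

```python
from dataclasses import dataclass, field

@dataclass
class Assistant:
    id: str
    instructions: str

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class Thread:
    id: str
    messages: list = field(default_factory=list)

@dataclass
class Run:
    # A Run binds one assistant to one thread for a single execution
    assistant_id: str
    thread_id: str

assistant = Assistant(id="asst_1", instructions="Be helpful.")
thread = Thread(id="th_1")
thread.messages.append(Message(role="user", content="Hi"))
run = Run(assistant_id=assistant.id, thread_id=thread.id)
print(run.assistant_id)  # → asst_1
```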
Project Uni5 — the next milestone.
Adapters for transformers, GGUF, and vLLM, so a model straight off a training run gets a full orchestration platform in minutes. From the lab to enterprise-grade orchestration — instantly.
Analytics: View Live Download Trends on ClickPy →
```bash
pip install projectdavid
```

Requirements: Python 3.10+ · A running Project David platform instance
```python
import os

from dotenv import load_dotenv
from projectdavid import Entity

load_dotenv()

client = Entity(
    base_url=os.getenv("BASE_URL"),  # default: http://localhost:80
    api_key=os.getenv("ENTITIES_API_KEY"),
)

# Create an assistant
assistant = client.assistants.create_assistant(
    name="my_assistant",
    instructions="You are a helpful AI assistant.",
)

# Create a thread and send a message
thread = client.threads.create_thread()
message = client.messages.create_message(
    thread_id=thread.id,
    role="user",
    content="Tell me about the latest trends in AI.",
    assistant_id=assistant.id,
)

# Create a run
run = client.runs.create_run(
    assistant_id=assistant.id,
    thread_id=thread.id,
)

# Stream the response
stream = client.synchronous_inference_stream
stream.setup(
    user_id=os.getenv("ENTITIES_USER_ID"),
    thread_id=thread.id,
    assistant_id=assistant.id,
    message_id=message.id,
    run_id=run.id,
    api_key=os.getenv("PROVIDER_API_KEY"),
)

for chunk in stream.stream_chunks(
    model="hyperbolic/deepseek-ai/DeepSeek-V3-0324",
    timeout_per_chunk=15.0,
):
    content = chunk.get("content", "")
    if content:
        print(content, end="", flush=True)
```

See the Quick Start guide for the event-driven interface, tool calling, and advanced usage.
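A common pattern on top of the streaming loop above is to accumulate the chunks into the full reply. A minimal sketch, assuming the chunk dicts carry a `content` key as in the quick-start example (the fake chunk list below is illustrative):

```python
def collect_stream(chunks):
    """Accumulate 'content' fields from an iterable of chunk dicts
    (the shape yielded by stream_chunks) into one string."""
    parts = []
    for chunk in chunks:
        content = chunk.get("content", "")
        if content:
            parts.append(content)
    return "".join(parts)

# Illustrative chunks standing in for a live stream
fake_chunks = [{"content": "Hel"}, {"content": "lo"}, {"status": "done"}]
print(collect_stream(fake_chunks))  # → Hello
```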
Full list of supported providers and endpoints →
Works with any OpenAI-compatible endpoint out of the box — including Ollama for fully local inference.
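For reference, "OpenAI-compatible" means the endpoint accepts the standard `/v1/chat/completions` request shape. A minimal sketch of that payload, with no network call and an illustrative model name:

```python
import json

def build_chat_request(model: str, user_text: str) -> dict:
    """Build the standard OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "stream": True,  # request server-sent streaming chunks
    }

# "llama3" is a placeholder — use whatever model your endpoint serves
payload = build_chat_request("llama3", "Hello")
body = json.dumps(payload)  # the JSON body POSTed to /v1/chat/completions
print(payload["messages"][0]["role"])  # → user
```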
| Variable | Description |
|---|---|
| ENTITIES_API_KEY | Your Entities API key |
| ENTITIES_USER_ID | Your user ID |
| BASE_URL | Platform base URL (default: http://localhost:80) |
| PROVIDER_API_KEY | Your inference provider API key |
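These variables are typically supplied via a `.env` file picked up by `load_dotenv()`. An illustrative layout (all values are placeholders):

```env
# .env — placeholder values, replace with your own
ENTITIES_API_KEY=your-entities-api-key
ENTITIES_USER_ID=your-user-id
BASE_URL=http://localhost:80
PROVIDER_API_KEY=your-provider-api-key
```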
| Topic | Link |
|---|---|
| Quick Start | sdk-quick-start.md |
| Assistants | sdk-assistants.md |
| Threads | sdk-threads.md |
| Messages | sdk-messages.md |
| Runs | sdk-runs.md |
| Inference | sdk-inference.md |
| Tools | sdk-tools.md |
| Function Calls | function-calling-and-tool-execution.md |
| Code Interpreter | sdk-code-interpreter.md |
| Files | sdk-files.md |
| Vector Store | sdk-vector-store.md |
Full hosted docs coming soon at docs.projectdavid.co.uk
| Repo | Description |
|---|---|
| platform | Core orchestration engine |
| entities-common | Shared utilities and validation |
| david-core | Docker orchestration layer |
| reference-frontend | Reference streaming frontend |
| entities_cook_book | Minimal tested examples — streaming, tools, search, stateful logic |