Features
- Tracing
- Evaluation
- Prompt Engineering
- Datasets & Experiments
Quick Starts
Running Phoenix for the first time? Select a quick start below.

Send Traces From Your App
See what’s happening inside your LLM application with distributed tracing
Measure Performance with Evaluations
Measure quality with LLM-as-a-judge and custom evaluators
Iterate on Your Prompts
Experiment with prompts, compare models, and version your work
Optimize Your App with Experiments
Test your application systematically and track performance over time
Next Steps
The best next step is to start using Phoenix: begin with a quickstart to send data into Phoenix, then build from there. See the Quickstart Overview for more on what you'll build.

Other Resources
Integrations
Add instrumentation for OpenAI, LangChain, LlamaIndex, and more
Self-Host
Deploy Phoenix on Docker, Kubernetes, or your cloud of choice
Cookbooks
Example notebooks for tracing, evals, RAG analysis, and more
Community
Join the Phoenix Slack to ask questions and connect with developers