CHANGELOG.md (2 additions, 1 deletion)

@@ -10,4 +10,5 @@
 | 0.1.1 | Prometheus metrics logging. Worker/thread counts and the worker class can be set via environment variables. Streaming fixes. Multiple providers per model with a default balanced strategy. |
 | 0.2.0 | Add balancing strategies: `balanced`, `weighted`, `dynamic_weighted`, and `first_available`, which work for both streaming and non-streaming requests. Include Prometheus metrics logging via a `/metrics` endpoint. First stage of the `llm_router_lib` library, to simplify usage of `llm-router-api`. |
 | 0.2.1 | Fix streaming: OpenAI->Ollama, Ollama->OpenAI. Add Redis caching of model-provider availability (when using the `first_available` strategy). Add the `llm_router_web` module with a simple Flask-based frontend to manage llm-router config files. |
-| 0.2.2 | Update dockerfile and requirements. Fix routing with vLLM. |
+| 0.2.2 | Update dockerfile and requirements. Fix routing with vLLM. |
+| 0.2.3 | New web configurator: handles projects and configs for each user separately. |
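The 0.2.0 entry above mentions Prometheus metrics exposed via a `/metrics` endpoint. As a minimal sketch, the standard Prometheus text exposition format that such an endpoint serves can be parsed like this (the `parse_metrics` helper and the sample payload are illustrative assumptions, not part of llm-router):

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse Prometheus text-exposition lines into {metric: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and comment lines (# HELP / # TYPE annotations).
        if not line or line.startswith("#"):
            continue
        # Each sample line is "<name>{labels} <value>"; split on the
        # last space so label values containing spaces are preserved.
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            pass  # ignore malformed lines
    return metrics

# Hypothetical payload resembling what a /metrics scrape might return.
sample = """\
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{path="/v1/chat/completions"} 42
process_cpu_seconds_total 1.5
"""
print(parse_metrics(sample))
```

In a real deployment the text would come from an HTTP GET of the router's `/metrics` URL (host and port depend on your configuration).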