Releases: RightNow-AI/openfang
v0.5.1 — Community Contributions
9 community PRs merged after strict review (24 PRs reviewed, 11 rejected, 4 closed).
Fixes
- Dashboard settings page loading state fix (#750)
- KaTeX loaded on demand to prevent first-paint blocking (#748)
- Provider model normalization — display names resolve through catalog (#714)
- Invisible approval requests now visible with history, badge, and polling (#713)
- Matrix `auto_accept_invites` now configurable, defaults to false (security) (#711)
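The new Matrix invite setting might look like the following config.toml fragment. The `auto_accept_invites` key comes from the release note; the `[channels.matrix]` section name is an assumption for illustration.

```toml
# Hypothetical config.toml fragment — section name assumed, key from the
# release note. Defaults to false so untrusted rooms can't auto-join the bot.
[channels.matrix]
auto_accept_invites = false  # set true only for trusted homeservers
```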
Dependencies
- docker/build-push-action 6 → 7 (#741)
- docker/setup-buildx-action 3 → 4 (#740)
- roxmltree 0.20 → 0.21 (#744)
- zip 2.4 → 4.6 (#742)
Full diff: v0.5.0...v0.5.1
v0.5.0 — Milestone Release
29 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved.
Features
- Image generation pipeline (DALL-E/GPT-Image)
- WeCom channel adapter
- Docker sandbox runtimes
- Shell skill runtime
- Slack unfurl links support
- Release-fast build profile
Improvements
- Channel agent re-resolution
- Stable hand agent IDs
- Async session save
- Vault wiring for credentials
- Telegram formatting improvements
- Mastodon polling fix
- Chromium no-sandbox root support
- Tool error guidance in agent loop
- Agent rename fix
- Codex id_token support
Community
- Community docs and fixes (multiple rounds)
- WhatsApp setup documentation
- CI action bumps
- Docker build args
- Lockfile sync
- Docs link fixes
Full diff: v0.4.3...v0.5.0
v0.4.9
Bugs Fixed
- Image pipeline (#686): REST API and WebSocket now pass image attachments as `content_blocks` directly to the LLM via `send_message_with_handle_and_blocks()` / `send_message_streaming()`. Previously images were injected as a separate session message and never reached vision models in the current turn. All 3 API entry points (REST, WebSocket, channels) now use the same flow.
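A minimal sketch of the unified flow described above: every entry point builds the same list of content blocks so the image reaches the vision model in the current turn. All type and function names here are illustrative, not OpenFang's actual API.

```rust
// Hypothetical content-block model — names are assumptions, not OpenFang's API.
#[derive(Debug, Clone, PartialEq)]
enum ContentBlock {
    Text(String),
    Image { media_type: String, base64: String },
}

// Unified builder shared by REST, WebSocket, and channel entry points:
// text and images travel together in one turn instead of the image being
// injected as a separate session message.
fn build_blocks(text: &str, images: &[(String, String)]) -> Vec<ContentBlock> {
    let mut blocks = vec![ContentBlock::Text(text.to_string())];
    for (media_type, base64) in images {
        blocks.push(ContentBlock::Image {
            media_type: media_type.clone(),
            base64: base64.clone(),
        });
    }
    blocks
}
```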
Docs
- Added community troubleshooting FAQ: Docker setup, Caddy basicauth, embedding model config, email allowed_senders, Z.AI/GLM-5 config, Kimi 2.5, OpenRouter free models, Claude Code integration, trader hand permissions, multiple Telegram bots workaround.
Full changelog since v0.4.4
26 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved across v0.4.4–v0.4.9.
v0.4.8
Bugs Fixed
- Fix HandCategory TOML parse error — added Finance + catch-all Other variant (#717)
- Fix LINE token detection heuristic — long tokens (>80 chars) recognized as direct values (#729)
- Fix General Assistant max_iterations too low — bumped from 50 to 100 (#719)
- Fix knowledge_query SQL parameter binding mismatch (#638)
- Fix WhatsApp Cloud API silently swallowing send errors (#707)
- Fix dashboard provider dropdown missing local providers (#683)
Previous (v0.4.5–v0.4.7)
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP credentials from .env/vault (#660)
- Fix image base64 compaction storms (#648)
- Fix phantom action hallucination (#688)
- Fix desktop app .env loading (#687)
- Fix duplicate sessions (#651)
- Fix Anthropic null tool_use input (#636)
- Fix temperature for reasoning models (#640)
- Fix OpenRouter prefix on fallbacks (#630)
- Fix streaming metering persistence (#627)
- Fix MCP dash names (#616)
- Fix deepseek-reasoner multi-turn (#618)
- Fix NO_REPLY leak to channels (#614)
- Fix skill install button (#625)
- Fix cron delivery (#601)
Features
- Azure OpenAI provider (#631)
- LaTeX rendering in chat (#622)
- PWA support (#621)
- WeCom channel adapter (#629)
- Shell/Bash skill runtime (#624)
- DingTalk Stream adapter (#353)
- Feishu/Lark unified adapter (#329)
- Parakeet MLX speech-to-text (#607)
- Codex GPT-5.4 (#608)
- 100+ community PRs reviewed and merged
v0.4.7
Bugs fixed:
- Fix WhatsApp Cloud API silently swallowing errors on Image/File/Location sends (#707)
- Fix dashboard provider dropdown hardcoded — now includes all 14 cloud + 4 local providers (#683)
- Fix knowledge_query SQL parameter binding mismatch — queries now return matching entities (#638)

Previous (v0.4.6):
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP servers not receiving credentials from .env/vault (#660)
- Fix image base64 causing compaction storms (#648)
- Fix phantom action hallucination (#688)
- Fix desktop app not loading .env files (#687)
- Fix duplicate sessions from session ID mismatch (#651)
- Fix Anthropic null tool_use input for parameterless calls (#636)
- Fix temperature rejection for reasoning models (#640)
v0.4.6
Bugs fixed:
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP servers not receiving credentials from .env/vault (#660)
- Fix image base64 causing compaction storms (#648)
- Fix phantom action hallucination — LLM can't claim completion without tools (#688)
- Fix desktop app not loading .env files (#687)
- Fix duplicate sessions from session ID mismatch (#651)
- Fix Anthropic null tool_use input for parameterless calls (#636)
- Fix temperature rejection for reasoning models (GPT-5-nano, DeepSeek-R1) (#640)

Closed:
- 20+ issues triaged, responded to, and resolved
v0.4.5
Bugs fixed:
- Fix infinite loop when Gemini returns only Thinking blocks (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP servers not receiving credentials from .env/vault (#660)
- Fix image base64 causing compaction storms — strip after LLM processes (#648)
- Fix phantom action hallucination — detect when LLM claims completion without calling tools (#688)
- Fix OpenRouter model prefix not stripped on fallback chains (#630)
- Fix streaming path missing metering.record() — token usage now persisted (#627)
- Fix MCP server names with dashes not resolving (#616)
- Fix deepseek-reasoner multi-turn conversations (#618)
- Fix silent/NO_REPLY responses leaking to channels (#614)
- Fix installed skills still showing Install button (#625)
- Fix cron channel delivery was a no-op — all 3 delivery variants work (#601)

Features:
- Azure OpenAI provider with deployment-based URLs and api-key header (#631)
- LaTeX rendering in chat via KaTeX (#622)
- Progressive Web App support — manifest.json + service worker (#621)
- WeCom (WeChat Work) channel adapter with AES-256-CBC encryption (#629)
- Shell/Bash skill runtime (#624)
- DingTalk Stream mode adapter (#353)
- Feishu/Lark unified adapter with region toggle (#329)
- Stable hand agent IDs via UUID v5 (#520)
- Telegram reply-to-message context (#567)
- Slack app_mention event support (#540)
- Suppress error responses on broadcast channels (#536)
- Dashboard provider editing (#600)
- Codex GPT-5.4 model catalog (#608)
- Chromium --no-sandbox for root (#394)
- Tool error guidance to prevent fabricated results (#424)
- Channel agent re-resolution by name (#626)
- Slack unfurl_links config (#623)
- Docker build args for faster dev builds (#541)
- Parakeet MLX local speech-to-text (#607)
- OnceLock for peer registry (removes unsafe) (#526 partial)

Community:
- 100+ PRs reviewed and processed
- Telegram markdown block-level parser (#595)
- WhatsApp group reply routing (#604)
- Mastodon notification polling fix (#538)
- Async session save for 1-CPU deployments (#534)
v0.4.4
- Wire credential vault into main flows (dashboard save, CLI save, kernel boot) — API keys now stored in AES-256-GCM vault with dual-write to secrets.env for backward compat
- Fix cron channel delivery that was a no-op — Channel, LastChannel, and Webhook variants all deliver now
- Propagate cron delivery failures to scheduler (one-shot jobs not removed on failure)
- Add credential resolver (vault → dotenv → env var) to kernel for unified secret resolution
- Add remove_from_vault() to CredentialResolver
- Bump to v0.4.4
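The vault → dotenv → env-var resolution order described above can be sketched as a simple fallback chain. Struct fields and the `resolve` method are assumptions for illustration, not OpenFang's actual CredentialResolver API.

```rust
use std::collections::HashMap;
use std::env;

// Minimal sketch of unified secret resolution (names assumed):
// check the decrypted vault first, then parsed .env values, then the
// process environment.
struct CredentialResolver {
    vault: HashMap<String, String>,  // decrypted AES-256-GCM vault entries
    dotenv: HashMap<String, String>, // parsed .env / secrets.env values
}

impl CredentialResolver {
    fn resolve(&self, key: &str) -> Option<String> {
        self.vault
            .get(key)
            .or_else(|| self.dotenv.get(key)) // dotenv is the second source
            .cloned()
            .or_else(|| env::var(key).ok()) // process environment checked last
    }
}
```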
v0.4.3
Bug Fixes
- Open links in new tab (#612): External links in markdown agent responses now open in a new tab with `target="_blank"`.
- Browser hand install (#611): Homebrew "already an App at" message now correctly recognized as success instead of failure.
- "FREE is not a valid model" (#610): Added `"free"`, `"openrouter/free"`, and `"free-reasoning"` aliases for OpenRouter free-tier models.
- WhatsApp drops media (#605): Gateway now handles images, voice notes, videos, documents, and stickers with descriptive placeholders instead of silently dropping them.
- WhatsApp sender metadata (#597): Sender identity (phone, name) now flows end-to-end from API → kernel → system prompt. Agents know who sent each message.
- Config overwrite warning (#578): Web dashboard shows a warning toast when saving an API key triggers an auto-provider-switch.
- Linux libssl error (#582): OpenSSL is now statically compiled via vendored feature — no runtime libssl dependency.
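The statically-compiled OpenSSL fix above follows the standard rust-openssl pattern: enabling the `vendored` Cargo feature builds and statically links OpenSSL at compile time, so the binary has no runtime libssl dependency. A sketch of the Cargo.toml entry (the crate version shown is an assumption):

```toml
# Cargo.toml fragment — the "vendored" feature of the openssl crate compiles
# OpenSSL from source and links it statically (version number assumed).
[dependencies]
openssl = { version = "0.10", features = ["vendored"] }
```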
Enhancements
- Minimax global URL (#576): Default URL switched to the global endpoint (`api.minimax.io/v1`). China users can override via `[provider_urls]` in config.toml.
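An override for China-region users might look like the following config.toml fragment. The `[provider_urls]` table name comes from the release note; the `minimax` key and the endpoint value are assumptions for illustration.

```toml
# Hypothetical config.toml override — table name from the release note,
# key name and URL assumed. Replaces the default api.minimax.io/v1 endpoint.
[provider_urls]
minimax = "https://api.minimaxi.com/v1"
```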
v0.4.2
Bug Fixes
- LoopGuard poll detection (#603): Removed the fragile `cmd.len() < 50` heuristic. Poll detection is now purely keyword-based — long kubectl/docker commands are correctly identified.
- Custom model provider display (#581): Model switcher dropdown now shows `provider:model` format instead of just the display name.
- Telegram typing indicator (#571): Typing indicator now refreshes every 4 seconds continuously during LLM processing, instead of expiring after 5 seconds.
- "No agent selected" error (#569): Agents created via API are now immediately registered in the channel router's name cache.
- Tool calls denied by approval (#537): Denial message now includes guidance to use `auto_approve = true` in config.toml or the `--yolo` flag.
- WhatsApp sender metadata (#597): MessageRequest and PromptContext now have sender_id/sender_name fields for identity-aware agents.
- LLM auth boot failure (#572): Already fixed in v0.4.0 (auto-detect fallback). Commented for users on older versions.
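The LoopGuard change above (#603) replaces a length cutoff with pure keyword matching, so a long `kubectl` or `docker` invocation is classified by what it contains rather than how long it is. A minimal sketch — the function name and keyword list are illustrative, not OpenFang's actual implementation:

```rust
// Hypothetical keyword-based poll detection with no length heuristic:
// long commands are still checked against the marker list.
fn is_polling_command(cmd: &str) -> bool {
    const POLL_MARKERS: &[&str] = &["--watch", "--follow", "tail -f", "watch "];
    POLL_MARKERS.iter().any(|marker| cmd.contains(marker))
}
```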
Enhancements
- NVIDIA NIM provider (#579): Added as OpenAI-compatible provider with 5 models (nemotron-70b, llama-3.1-405b/70b, mistral-large, nemotron-4-340b). Set `NVIDIA_API_KEY` to use.
- YOLO mode (#573): `openfang start --yolo` auto-approves all tool calls. Also configurable via `auto_approve = true` in the `[approval]` section.
- Skill output persistence (#596): Tool/skill output cards in dashboard now default to expanded instead of collapsed, keeping results visible.
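Per the YOLO mode note above, the persistent equivalent of the `--yolo` flag would be a config.toml fragment like this (section and key names are taken from the release note):

```toml
# config.toml — auto-approve all tool calls, equivalent to running
# `openfang start --yolo`. Use with care: no approval prompts will appear.
[approval]
auto_approve = true
```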