perf: fix renderer heap exhaustion on long-running sessions #120
Psypeal wants to merge 1 commit into matt1398:main from
Conversation
The renderer crashes with exitCode 133 (V8 OOM) after ~24 hours. Root causes: full conversation re-creation on every refresh, O(N²) injection accumulation, and raw session data stored in memory.

Fixes:
- Strip detail.chunks/processes after transformation (slimDetail)
- Incremental conversation update — reuse unchanged ChatItem objects, only re-transform the tail (O(delta) vs O(N) per refresh)
- Fingerprint-based skip when session content is unchanged
- Adaptive debounce scaled per session (60 s for >1000 AI groups down to the 150 ms default); uses the target session's totalAIGroups
- batchAsync for concurrency-limited IPC calls
- Phase 1/Phase 2 split: render the conversation immediately, compute context stats async
- O(N²) → O(N) injection accumulation in the context/claudeMd trackers
- Precomputed accumulatedCounts on ContextStats
- Export re-fetches the full detail on demand (MoreMenu)
- Lower VIRTUALIZATION_THRESHOLD 120 → 30 (virtualize sooner to keep fewer DOM nodes for large sessions)
- Dynamic renderer heap: 50% of system RAM clamped to [2–4 GB]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
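To make the incremental-update idea concrete, here is a rough sketch of the reuse-the-prefix strategy. The names and shapes below are illustrative assumptions, not the real API — the actual incrementalUpdateConversation in groupTransformer.ts works on AI groups and chunk metadata, not this toy shape:

```typescript
// Hypothetical sketch: reuse previously-transformed items for the stable
// prefix, and re-transform only the last previously-seen chunk (which may
// have grown) plus any new chunks.
interface Chunk {
  id: string;
  text: string;
}

interface ChatItem {
  chunkId: string;
  rendered: string;
}

function transformChunk(chunk: Chunk): ChatItem {
  // Stand-in for the real (expensive) chunk → ChatItem transformation.
  return { chunkId: chunk.id, rendered: chunk.text.toUpperCase() };
}

function incrementalUpdate(prevItems: ChatItem[], chunks: Chunk[]): ChatItem[] {
  // Everything before the last previously-transformed chunk is unchanged —
  // reuse those ChatItem objects by reference, so work is O(delta), not O(N).
  const stableCount = Math.max(0, prevItems.length - 1);
  const reused = prevItems.slice(0, stableCount);
  const tail = chunks.slice(stableCount).map(transformChunk);
  return [...reused, ...tail];
}
```

Reusing items by reference also matters for React: unchanged ChatItems keep their identity, so memoized rows skip re-render.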
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly improves the performance and memory efficiency of long-running sessions, specifically targeting renderer heap exhaustion. It introduces several memory-only optimizations, such as dynamic heap sizing, incremental conversation updates, adaptive refresh debouncing, and deferred context processing. These changes aim to reduce memory allocation, prevent O(N²) operations, and ensure a smoother user experience during extended usage.
Code Review
This is an excellent pull request that comprehensively addresses renderer heap exhaustion through a series of well-executed performance optimizations. The changes demonstrate a deep understanding of the performance bottlenecks and apply effective strategies to mitigate them.
The introduction of incremental conversation updates, fingerprint-based skipping, and adaptive debouncing will significantly reduce unnecessary processing and memory churn, especially in long-running sessions. The memory-saving techniques, such as stripping raw chunk data after transformation and fixing O(N²) complexities in context tracking, are crucial for improving the application's stability over time.
I was particularly impressed by the Phase 1/Phase 2 split for session loading, which will greatly enhance perceived performance by rendering the conversation immediately while deferring expensive statistics computation. The addition of the batchAsync utility for concurrent IPC calls and the dynamic renderer heap sizing are also smart improvements for robust resource management.
The code is high-quality, well-commented, and the changes are thoughtfully implemented with attention to details like immutability. Overall, this is a fantastic contribution that will substantially improve the performance and reliability of the application. I found no issues that meet the severity threshold for comments.
📝 Walkthrough

This PR introduces performance optimizations and refactors state management across the application. Changes include dynamic renderer heap limit configuration, adaptive session refresh debouncing, two-phase session detail processing with fingerprinting and batching, refactored context accumulation logic, incremental conversation updates, and UI prop changes to decouple session export from stored session details.
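The fingerprinting mentioned above is a cheap change detector. A minimal sketch, assuming a simplified chunk shape — the field layout follows the PR description's `chunkCount:lastChunkSize:isOngoing` format, but ChunkLike and the function name are hypothetical:

```typescript
// Sketch: a session fingerprint built from chunk count, last-chunk size, and
// the ongoing flag. If the fingerprint is unchanged since the last refresh,
// the expensive re-transform can be skipped entirely.
interface ChunkLike {
  content: string;
}

function computeFingerprint(chunks: ChunkLike[], isOngoing: boolean): string {
  const lastChunkSize = chunks.length > 0 ? chunks[chunks.length - 1].content.length : 0;
  return `${chunks.length}:${lastChunkSize}:${isOngoing}`;
}

// Unchanged content produces an identical fingerprint, so the refresh is a no-op.
const before = computeFingerprint([{ content: 'hello' }], true);
const after = computeFingerprint([{ content: 'hello' }], true);
console.log(before === after); // true
```

The fingerprint is deliberately lossy (it only looks at counts and the tail size) — that is the trade-off that makes it O(1) per watcher event.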
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/renderer/store/slices/sessionDetailSlice.ts (1)
640-708: ⚠️ Potential issue | 🟠 Major

Refresh the derived context maps alongside conversation.

This path swaps in newConversation/slimDetail for the session and every open tab, but it never clears or recomputes sessionClaudeMdStats, sessionContextStats, or sessionPhaseInfo. After a new AI group or compaction arrives, those maps no longer match the current conversation IDs, so the token/context panels show stale or missing data until a full fetch. The clean fix is to reuse the same Phase 2 pipeline from fetchSessionDetail() here as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/slices/sessionDetailSlice.ts` around lines 640 - 708, The update replaces conversation/slimDetail for the session and tabs but doesn't refresh derived maps sessionClaudeMdStats, sessionContextStats, or sessionPhaseInfo, leaving token/context panels stale; run the same Phase 2 pipeline used in fetchSessionDetail() after swapping in newConversation/slimDetail to recompute those maps and write them into the store. Concretely: after creating slimDetail/newConversation and before set({ tabSessionData: latestTabSessionData }) (or inside the same set callback), invoke the internal Phase 2 logic from fetchSessionDetail() (or extract it into a helper like computeSessionDerivedMaps(sessionId, conversation, slimDetail)) to produce updated sessionClaudeMdStats, sessionContextStats, and sessionPhaseInfo entries for this sessionId, then include those updated maps in the state update and in latestTabSessionData entries so per-tab views see refreshed derived data. Ensure you reference and update the same keys used in the slice (sessionClaudeMdStats, sessionContextStats, sessionPhaseInfo) and reuse fetchSessionDetail() behavior to keep logic consistent.
🧹 Nitpick comments (5)
src/main/index.ts (2)
12-23: Reorder imports to match the repo's import ordering rule.

Line 12 currently places @shared/* imports before external packages. Please move the external imports (electron, fs, os, path) above the path-alias imports.

Proposed diff

```diff
-import {
-  DEFAULT_WINDOW_HEIGHT,
-  DEFAULT_WINDOW_WIDTH,
-  DEV_SERVER_PORT,
-  getTrafficLightPositionForZoom,
-  WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL,
-} from '@shared/constants';
-import { createLogger } from '@shared/utils/logger';
 import { app, BrowserWindow, ipcMain } from 'electron';
 import { existsSync } from 'fs';
 import { totalmem } from 'os';
 import { join } from 'path';
+import {
+  DEFAULT_WINDOW_HEIGHT,
+  DEFAULT_WINDOW_WIDTH,
+  DEV_SERVER_PORT,
+  getTrafficLightPositionForZoom,
+  WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL,
+} from '@shared/constants';
+import { createLogger } from '@shared/utils/logger';
```

As per coding guidelines, **/*.{ts,tsx} should "Organize imports in order: external packages, path aliases (@main, @renderer, @shared, @preload), then relative imports".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/index.ts` around lines 12 - 23, The import ordering in this file is wrong: move external package imports (electron's app, BrowserWindow, ipcMain; fs.existsSync; os.totalmem; path.join) above the path-alias imports (symbols DEFAULT_WINDOW_HEIGHT, DEFAULT_WINDOW_WIDTH, DEV_SERVER_PORT, getTrafficLightPositionForZoom, WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL and createLogger) so the module list follows the repo rule (external packages first, then path aliases, then relative imports); update the import block so the same identifiers (DEFAULT_WINDOW_HEIGHT, createLogger, app, BrowserWindow, ipcMain, existsSync, totalmem, join, etc.) are preserved but reordered accordingly.
33-35: Rename heap constants to UPPER_SNAKE_CASE.

Lines 33-35 introduce constant values using totalMB and heapMB, which conflicts with the constant naming convention.

Proposed diff

```diff
-const totalMB = Math.floor(totalmem() / (1024 * 1024));
-const heapMB = Math.min(4096, Math.max(2048, Math.floor(totalMB * 0.5)));
-app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${heapMB}`);
+const TOTAL_MB = Math.floor(totalmem() / (1024 * 1024));
+const HEAP_MB = Math.min(4096, Math.max(2048, Math.floor(TOTAL_MB * 0.5)));
+app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${HEAP_MB}`);
```

As per coding guidelines, **/*.{ts,tsx} should "Use UPPER_SNAKE_CASE for constant values".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/index.ts` around lines 33 - 35, Rename the lowerCamel constants to UPPER_SNAKE_CASE: change totalMB to TOTAL_MB and heapMB to HEAP_MB, update the Math calculation assignment accordingly (e.g., const TOTAL_MB = Math.floor(totalmem() / (1024 * 1024)); const HEAP_MB = Math.min(4096, Math.max(2048, Math.floor(TOTAL_MB * 0.5)));), and propagate the new symbol HEAP_MB into the call to app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${HEAP_MB}`) so all references match the naming guideline.

src/renderer/components/layout/MoreMenu.tsx (1)

40-40: Prevent overlapping exports while one export is in flight.

Without a loading guard, users can trigger duplicate exports by reopening the menu during an active request.
♻️ Suggested hardening
```diff
 const handleExport = useCallback(
   (format: ExportFormat) => {
+    if (exportLoading) return;
     if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
     const { projectId, sessionId } = activeTab;
     setIsOpen(false);
     setExportLoading(true);
@@
-  [activeTab]
+  [activeTab, exportLoading]
 );
```

Also applies to: 74-90, 113-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/layout/MoreMenu.tsx` at line 40, The MoreMenu component's export handlers can trigger duplicate exports because exportLoading is not used to guard actions; update each export function (e.g., exportSelectedConversations, exportAllConversations, and any handlers referenced around lines ~74-90 and ~113-131) to check exportLoading at the top and return early if true, wrap the async export logic with setExportLoading(true) before the try and setExportLoading(false) in a finally block, and also use exportLoading to disable the export-related menu items/buttons (or prevent menu reopen) while the request is in flight so only one export can run at a time.src/renderer/store/index.ts (1)
87-103: Keep the debounce sizing lookup O(1) and session-scoped.

When tabData is missing, this falls back to filtering state.conversation.items, which both walks the full active transcript on every watcher event and can size the debounce from the wrong session. SessionConversation already exposes totalAIGroups, so the fallback can stay constant-time and only apply to the selected session.

♻️ Suggested change

```diff
-      const aiGroupCount =
-        tabData?.conversation?.totalAIGroups ??
-        (state.conversation?.items ?? []).filter((i) => i.type === 'ai').length;
+      const aiGroupCount =
+        tabData?.conversation?.totalAIGroups ??
+        (state.selectedSessionId === sessionId ? state.conversation?.totalAIGroups : 0) ??
+        0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/index.ts` around lines 87 - 103, The debounce sizing walks the entire global transcript when tabData is missing and may size debounce for the wrong session; change aiGroupCount to be session-scoped and O(1) by using the selected session's conversation totalAIGroups instead of filtering state.conversation.items. Replace the fallback expression in the aiGroupCount computation so it uses state.tabSessionData[sessionId]?.conversation?.totalAIGroups (or 0 if missing) when tabData is absent, leaving the rest of the debounce logic (aiGroupCount thresholds and SESSION_REFRESH_DEBOUNCE_MS) unchanged.src/renderer/utils/contextTracker.ts (1)
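The reviewer's point is easier to see against a sketch of the sizing function: because it runs on every watcher event, the AI-group count it reads must be O(1). A minimal sketch — the 150 ms default and the 60 s ceiling for >1000 AI groups come from the PR description, but the intermediate tiers and the function name here are illustrative assumptions:

```typescript
// Hypothetical sketch of per-session adaptive debounce sizing.
const SESSION_REFRESH_DEBOUNCE_MS = 150;

function debounceForSession(totalAIGroups: number): number {
  if (totalAIGroups > 1000) return 60_000; // very long sessions: refresh at most once a minute
  if (totalAIGroups > 500) return 10_000;  // assumed intermediate tier
  if (totalAIGroups > 100) return 2_000;   // assumed intermediate tier
  return SESSION_REFRESH_DEBOUNCE_MS;      // short sessions stay responsive
}
```

Whatever the real tiers are, feeding this function a precomputed totalAIGroups (rather than filtering the full item array) is what keeps the per-event cost constant.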
857-894: Carry forward running totals instead of rescanning accumulatedInjections.

previousInjections.push(...) removes the spread allocations, but this block still recomputes every category total by scanning the full accumulated array for each AI group. On the long-running sessions this PR targets, Phase 2 is still O(N²) CPU. Thread the previous tokensByCategory and accumulatedCounts forward, then update them from newInjections only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/utils/contextTracker.ts` around lines 857 - 894, The current loop rescans accumulatedInjections to recompute tokensByCategory and accumulatedCounts causing O(N²) behavior; instead carry forward the previous tokensByCategory and accumulatedCounts and only apply increments for the newInjections batch. Replace the for..of over accumulatedInjections with iterating newInjections, updating tokensByCategory (claudeMd, mentionedFiles, toolOutputs, thinkingText, taskCoordination, userMessages) and accumulatedCounts using the same category branches as the switch (use injection.estimatedTokens, injection.toolCount, injection.breakdown.length, etc.), and remove the reinitialization of accumulatedCounts that wipes prior totals so previous values plus newInjections produce the running totals.
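The carry-forward approach the reviewer describes can be sketched roughly as follows. The Injection shape and category list are simplified assumptions drawn from the review text — the real tracker has more categories (mentionedFiles, thinkingText, taskCoordination) and per-injection fields (toolCount, breakdown):

```typescript
// Simplified sketch: O(delta) accumulation — update running totals from only
// the new injections, instead of rescanning the full accumulated history.
type Category = 'claudeMd' | 'toolOutputs' | 'userMessages';

interface Injection {
  category: Category;
  estimatedTokens: number;
}

interface RunningTotals {
  tokensByCategory: Record<Category, number>;
  count: number;
}

function applyNewInjections(prev: RunningTotals, newInjections: Injection[]): RunningTotals {
  // Copy the previous totals (cheap, fixed-size object) rather than
  // reinitializing to zero and rescanning everything accumulated so far.
  const tokensByCategory = { ...prev.tokensByCategory };
  for (const inj of newInjections) {
    tokensByCategory[inj.category] += inj.estimatedTokens;
  }
  return { tokensByCategory, count: prev.count + newInjections.length };
}
```

Each AI group then costs O(its own injections), so the whole Phase 2 pass is O(total injections) rather than O(N²).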
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 39fa130c-9c95-4fcf-8865-8a2335270d3e
📒 Files selected for processing (12)
src/main/index.ts
src/renderer/components/chat/ChatHistory.tsx
src/renderer/components/common/TokenUsageDisplay.tsx
src/renderer/components/layout/MoreMenu.tsx
src/renderer/components/layout/TabBar.tsx
src/renderer/store/index.ts
src/renderer/store/slices/sessionDetailSlice.ts
src/renderer/store/utils/batchAsync.ts
src/renderer/types/contextInjection.ts
src/renderer/utils/claudeMdTracker.ts
src/renderer/utils/contextTracker.ts
src/renderer/utils/groupTransformer.ts
```diff
   const handleExport = useCallback(
     (format: ExportFormat) => {
-      if (activeTabSessionDetail) {
-        triggerDownload(activeTabSessionDetail, format);
-      }
+      if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
+      const { projectId, sessionId } = activeTab;
+      setIsOpen(false);
+      setExportLoading(true);
+      // Re-fetch full detail (with chunks) since we strip them from the store to save memory.
+      void api
+        .getSessionDetail(projectId, sessionId)
+        .then((detail) => {
+          if (detail) triggerDownload(detail, format);
+        })
+        .finally(() => {
+          setExportLoading(false);
+        });
```
Add error handling to the export API call.
The promise chain can reject without handling, causing unhandled rejections and silent export failures in the UI. Wrap the call in try/catch and provide user feedback on failure:
```tsx
const handleExport = useCallback(
  async (format: ExportFormat) => {
    if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
    const { projectId, sessionId } = activeTab;
    setIsOpen(false);
    setExportLoading(true);
    try {
      const detail = await api.getSessionDetail(projectId, sessionId);
      if (!detail) return;
      triggerDownload(detail, format);
    } catch (error) {
      console.error('Failed to export session', error);
      // Consider adding user feedback (e.g., toast notification or error state)
    } finally {
      setExportLoading(false);
    }
  },
  [activeTab]
);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/layout/MoreMenu.tsx` around lines 74 - 88, The handleExport handler currently calls api.getSessionDetail without catching rejections, causing possible unhandled promise rejections; update handleExport (the useCallback that calls api.getSessionDetail, triggerDownload, setExportLoading, setIsOpen) to use async/await or add a .catch block so errors are handled: surround the API call in try/catch, log the error (e.g., console.error('Failed to export session', error)) and surface user feedback (toast or error state) on failure, and ensure setExportLoading(false) runs in a finally block so loading state is always cleared.

```ts
      // Use incremental update when a previous conversation exists —
      // reuses unchanged ChatItem objects, only re-transforms the tail.
      const prevConversation = get().conversation;
      const newConversation =
        prevConversation && prevConversation.items.length > 0
          ? incrementalUpdateConversation(prevConversation, enhancedChunks, [], isOngoing)
          : transformChunksToConversation(enhancedChunks, [], isOngoing);
```
Use the target session’s previous conversation here.
get().conversation is the global active-session transcript. When refreshSessionInPlace() runs for a session that is only visible in another pane, this hands incrementalUpdateConversation() the wrong prefix and reuses ChatItems from a different session.
🐛 Suggested change
```diff
-      const prevConversation = get().conversation;
+      const refreshState = get();
+      const prevConversation =
+        refreshState.selectedSessionId === sessionId
+          ? refreshState.conversation
+          : Object.values(refreshState.tabSessionData).find(
+              (tabData) => tabData.sessionDetail?.session?.id === sessionId
+            )?.conversation ?? null;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/slices/sessionDetailSlice.ts` around lines 587 - 593, The code is using get().conversation (the global active-session transcript) when computing newConversation in refreshSessionInPlace(), which can reuse ChatItems from the wrong session; instead fetch the previous conversation from the target session object (the session being refreshed) and pass that into incrementalUpdateConversation. Replace references to get().conversation with the target session's conversation (e.g., targetSession.conversation or sessions[sessionId].conversation), and keep the same fallback to transformChunksToConversation when that target conversation is undefined or empty; still pass enhancedChunks, [] and isOngoing to incrementalUpdateConversation/transformChunksToConversation.

```ts
export async function batchAsync<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let nextIndex = 0;

  async function worker(): Promise<void> {
    while (nextIndex < items.length) {
      const index = nextIndex++;
      results[index] = await fn(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, () => worker())
  );
  return results;
}
```
Validate concurrency before starting workers.
If concurrency <= 0, no worker is launched and this returns a sparse results array without processing anything. Since this helper is exported, it should fail fast or clamp the value.
🛡️ Suggested change
```diff
 export async function batchAsync<T, R>(
   items: T[],
   fn: (item: T) => Promise<R>,
   concurrency = 5
 ): Promise<R[]> {
+  if (!Number.isInteger(concurrency) || concurrency < 1) {
+    throw new RangeError('concurrency must be >= 1');
+  }
+
   const results = new Array<R>(items.length);
   let nextIndex = 0;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export async function batchAsync<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5
): Promise<R[]> {
  if (!Number.isInteger(concurrency) || concurrency < 1) {
    throw new RangeError('concurrency must be >= 1');
  }

  const results = new Array<R>(items.length);
  let nextIndex = 0;

  async function worker(): Promise<void> {
    while (nextIndex < items.length) {
      const index = nextIndex++;
      results[index] = await fn(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, () => worker())
  );
  return results;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/utils/batchAsync.ts` around lines 5 - 21, The batchAsync helper does not validate concurrency, so values <= 0 (or non-integers) prevent any workers from starting; in batchAsync, normalize the concurrency parameter before starting workers by coercing it to a positive integer (e.g. const effectiveConcurrency = Math.max(1, Math.floor(concurrency))) and use effectiveConcurrency in the Array.from call that starts workers; this ensures at least one worker runs (or alternatively throw if you prefer failing fast), and keeps the rest of the function (results array, worker function) unchanged.
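For readers following along, here is the helper as shown in the diff — with the guard the review suggests applied — plus a small usage example with simulated async work (the example inputs are, of course, illustrative):

```typescript
// batchAsync as it appears in the PR, with the suggested concurrency guard.
async function batchAsync<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5
): Promise<R[]> {
  if (!Number.isInteger(concurrency) || concurrency < 1) {
    throw new RangeError('concurrency must be >= 1');
  }

  const results = new Array<R>(items.length);
  let nextIndex = 0;

  // Each worker claims the next unprocessed index until the list is exhausted,
  // so at most `concurrency` calls are in flight at any moment.
  async function worker(): Promise<void> {
    while (nextIndex < items.length) {
      const index = nextIndex++;
      results[index] = await fn(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, () => worker())
  );
  return results;
}

// Usage: at most 3 simulated IPC calls in flight; results preserve input order.
void batchAsync([1, 2, 3, 4, 5, 6], async (id) => id * 2, 3).then((doubled) => {
  console.log(doubled); // [2, 4, 6, 8, 10, 12]
});
```

Because each worker writes into its claimed index, results come back in input order regardless of which calls finish first — which is what makes this a drop-in replacement for `Promise.all(items.map(fn))` with a concurrency cap.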
Summary
The renderer crashes with exitCode 133 (V8 OOM) after ~24 hours of continuous uptime. This PR addresses the root causes with memory-only optimizations (split from #114 per feedback).
Changes
Incremental conversation update (groupTransformer.ts)
- incrementalUpdateConversation() reuses unchanged ChatItem objects from the previous conversation — only re-transforms the last chunk + any new chunks

Fingerprint-based skip (sessionDetailSlice.ts)
- chunkCount:lastChunkSize:isOngoing per session

Adaptive debounce (store/index.ts)
- Uses the target session's totalAIGroups (not the global conversation) so a long session in one pane doesn't affect short sessions in other panes

Strip raw session data (sessionDetailSlice.ts)
- detail.chunks and detail.processes nulled after transformation (slimDetail)

O(N²) → O(N) fixes (contextTracker.ts, claudeMdTracker.ts)
- Replaced [...prev, ...new] spread accumulation with a mutable push loop
- previousPaths: Set<string> to avoid redundant set recreation
- Precomputed accumulatedCounts on ContextStats replaces per-render array filtering

Phase 1/Phase 2 split (sessionDetailSlice.ts)

Other
- batchAsync utility for concurrency-limited IPC calls
- VIRTUALIZATION_THRESHOLD 120 → 30 (virtualize sooner so fewer DOM nodes exist for large sessions — 30 is still large enough for casual sessions to render inline, but kicks in earlier for the 100+ item sessions that stress memory)
- Dynamic renderer heap: 50% of system RAM, clamped to [2 GB, 4 GB] — adapts to the machine instead of a hardcoded value

Addressing review feedback from #114
- Dynamic heap via os.totalmem() (2–4 GB range)
- 'completed' → 'complete': fixed to match the AIGroupStatus type
- Debounce uses the target session's totalAIGroups instead of the global conversation

Test plan
- pnpm typecheck — no type errors
- pnpm lint:fix — no lint errors

Summary by CodeRabbit
Bug Fixes
Performance Improvements