
perf: fix renderer heap exhaustion on long-running sessions#120

Open
Psypeal wants to merge 1 commit into matt1398:main from Psypeal:perf/renderer-memory-optimization

Conversation

@Psypeal (Contributor) commented Mar 18, 2026

Summary

The renderer crashes with exitCode 133 (V8 OOM) after ~24 hours of continuous uptime. This PR addresses the root causes with memory-only optimizations (split from #114 per feedback).

Changes

Incremental conversation update (groupTransformer.ts)

  • New incrementalUpdateConversation() reuses unchanged ChatItem objects from the previous conversation — only re-transforms the last chunk + any new chunks
  • Reduces per-refresh allocation from O(N) to O(delta)
  • Post-passes clone groups before mutating to preserve store snapshot immutability
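The reuse pattern described above can be sketched as follows. This is a hypothetical illustration, not the repo's actual `incrementalUpdateConversation()`: the `ChatItem` shape, `transformChunk`, and the "last chunk may have grown" assumption are stand-ins for the real transformer.

```typescript
// Illustrative sketch: reuse prior items for unchanged chunks, re-transform
// only the previous tail chunk plus any newly appended chunks.
interface ChatItem {
  id: string;
  text: string;
}

function transformChunk(chunk: string, index: number): ChatItem {
  return { id: `item-${index}`, text: chunk.toUpperCase() };
}

function incrementalUpdate(
  prevItems: ChatItem[],
  prevChunkCount: number,
  chunks: string[]
): ChatItem[] {
  if (chunks.length < prevChunkCount) {
    // Session shrank unexpectedly — fall back to a full transform.
    return chunks.map(transformChunk);
  }
  // Everything before the previous tail chunk is unchanged; reuse those
  // objects by reference so no new allocations occur for them.
  const stableCount = Math.max(0, prevChunkCount - 1);
  const reused = prevItems.slice(0, stableCount);
  const fresh = chunks
    .slice(stableCount)
    .map((c, i) => transformChunk(c, stableCount + i));
  return [...reused, ...fresh];
}
```

Because the reused entries are the same object references, React-style memoized renderers also skip re-rendering them, which is where the O(delta) win compounds.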

Fingerprint-based skip (sessionDetailSlice.ts)

  • Stores chunkCount:lastChunkSize:isOngoing per session
  • Skips transformation entirely when nothing changed (handles file-watcher double-fires)
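A minimal sketch of the fingerprint check, assuming the `chunkCount:lastChunkSize:isOngoing` format described above (the function name and chunk shape are illustrative, not the slice's real code):

```typescript
// Cheap change detection: if the fingerprint matches the stored value,
// skip the expensive conversation transform (e.g. on watcher double-fires).
const lastFingerprints = new Map<string, string>();

function shouldTransform(
  sessionId: string,
  chunks: { content: string }[],
  isOngoing: boolean
): boolean {
  const last = chunks[chunks.length - 1];
  const fp = `${chunks.length}:${last ? last.content.length : 0}:${isOngoing}`;
  if (lastFingerprints.get(sessionId) === fp) return false; // nothing changed
  lastFingerprints.set(sessionId, fp);
  return true;
}
```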

Adaptive debounce (store/index.ts)

  • Scales refresh interval with session size: 60s for >1000 AI groups, 30s/10s/3s for smaller thresholds, 150ms default
  • Uses the target session's cached totalAIGroups (not the global conversation) so a long session in one pane doesn't affect short sessions in other panes
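The scaling above amounts to a small threshold function. The >1000 → 60s and >500 → 30s steps come from this PR's description; the intermediate cutoffs for 10s and 3s are guesses for illustration only, as is the function name:

```typescript
// Adaptive debounce interval, scaled by the target session's size.
// Thresholds below 500 are assumed, not taken from the actual code.
function debounceMsFor(totalAIGroups: number): number {
  if (totalAIGroups > 1000) return 60_000; // very long sessions: refresh rarely
  if (totalAIGroups > 500) return 30_000;
  if (totalAIGroups > 100) return 10_000;
  if (totalAIGroups > 50) return 3_000;
  return 150; // small sessions stay near-realtime
}
```

Keyed off the target session's cached `totalAIGroups`, each pane gets its own interval, so one huge session can't slow refreshes for the others.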

Strip raw session data (sessionDetailSlice.ts)

  • detail.chunks and detail.processes nulled after transformation (slimDetail)
  • Export re-fetches full detail on demand from IPC
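The stripping step can be sketched like this. The `SessionDetail` shape here is a stand-in for the real type; only the null-out-after-transform pattern is taken from the description above:

```typescript
// After the conversation has been built from the raw data, drop the heavy
// arrays so the store no longer retains them; keep lightweight metadata.
interface SessionDetail {
  chunks: unknown[] | null;
  processes: unknown[] | null;
  meta: { id: string };
}

function slimDetail(detail: SessionDetail): SessionDetail {
  return { ...detail, chunks: null, processes: null };
}
```

Anything that still needs the raw payload (here, export) must re-fetch it over IPC on demand, which trades a one-off fetch for a permanently smaller resident heap.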

O(N²) → O(N) fixes (contextTracker.ts, claudeMdTracker.ts)

  • Replace [...prev, ...new] spread accumulation with mutable push loop
  • Thread previousPaths: Set<string> to avoid redundant set recreation
  • Precomputed accumulatedCounts on ContextStats replaces per-render array filtering
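A minimal before/after illustration of the accumulation fix (the `Injection` shape is a stand-in): spreading the previous array on every step re-copies everything accumulated so far, which is O(N²) in total, while pushing into one mutable array with a threaded `Set` of previous paths is O(N).

```typescript
interface Injection {
  path: string;
  tokens: number;
}

// Before: each step copies the whole accumulator — O(N²) total work.
function accumulateQuadratic(batches: Injection[][]): Injection[] {
  let acc: Injection[] = [];
  for (const batch of batches) {
    acc = [...acc, ...batch];
  }
  return acc;
}

// After: one mutable array plus a threaded previousPaths-style Set — O(N).
function accumulateLinear(batches: Injection[][]): Injection[] {
  const acc: Injection[] = [];
  const seen = new Set<string>();
  for (const batch of batches) {
    for (const inj of batch) {
      if (seen.has(inj.path)) continue; // already accumulated
      seen.add(inj.path);
      acc.push(inj);
    }
  }
  return acc;
}
```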

Phase 1/Phase 2 split (sessionDetailSlice.ts)

  • Phase 1: push conversation to store immediately for rendering
  • Phase 2: fire-and-forget async context stats computation
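The two-phase shape can be sketched as below. `store`, `loadSessionDetail`, and `computeContextStats` are hypothetical stand-ins for the slice's real members; only the "publish synchronously, compute stats off the critical path" structure is from the description above:

```typescript
type State = { conversation?: string[]; contextStats?: number };
const store: State = {};

function computeContextStats(conversation: string[]): number {
  // Placeholder for the expensive context-stats pass.
  return conversation.reduce((n, item) => n + item.length, 0);
}

function loadSessionDetail(chunks: string[]): void {
  // Phase 1: transform and publish immediately so the UI can render.
  store.conversation = chunks.map((c) => c.trim());
  // Phase 2: fire-and-forget; must never block or break rendering.
  void Promise.resolve().then(() => {
    store.contextStats = computeContextStats(store.conversation ?? []);
  });
}
```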

Other

  • batchAsync utility for concurrency-limited IPC calls
  • Lower VIRTUALIZATION_THRESHOLD 120 → 30 (virtualize sooner so fewer DOM nodes exist for large sessions — 30 is still large enough for casual sessions to render inline, but kicks in earlier for the 100+ item sessions that stress memory)
  • Dynamic renderer heap: 50% of system RAM, clamped to [2 GB, 4 GB] — adapts to the machine instead of a hardcoded value
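A concurrency-limited batch helper in the spirit of the `batchAsync` utility above might look like this — the signature is assumed, not copied from the PR. The `Math.max(1, …)` guard normalizes non-positive concurrency so at least one worker always runs:

```typescript
// Run `fn` over `items` with at most `concurrency` calls in flight,
// preserving input order in the results.
async function batchAsync<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const limit = Math.max(1, Math.floor(concurrency)); // guard against <= 0
  const results: R[] = new Array(items.length);
  let next = 0;
  const worker = async (): Promise<void> => {
    while (next < items.length) {
      const i = next++; // claim the next index (single-threaded, so safe)
      results[i] = await fn(items[i]);
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}
```

Bounding in-flight IPC calls this way keeps peak memory proportional to the concurrency limit rather than to the number of pending requests.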

Addressing review feedback from #114

  • 8GB heap concern: replaced with dynamic limit based on os.totalmem() (2–4 GB range)
  • 'completed' → 'complete': fixed to match AIGroupStatus type
  • Post-pass mutation: CompactGroup and isOngoing post-passes now clone before mutating
  • Adaptive debounce scope: now uses per-session totalAIGroups instead of global conversation
  • Scope: removed crash logging and task notification rendering (separate PRs)

Test plan

  • pnpm typecheck — no type errors
  • pnpm lint:fix — no lint errors
  • All 653 tests pass
  • Manual: run a long session (1h+), verify no crash and adaptive debounce scales
  • Manual: open multiple tabs with different session sizes, verify each gets its own debounce interval

Summary by CodeRabbit

  • Bug Fixes

    • Fixed session export to fetch current session data before download
    • Improved renderer memory allocation with dynamic heap sizing
  • Performance Improvements

    • Optimized chat rendering with adjusted virtualization threshold
    • Enhanced session refresh with adaptive timing based on content size
    • Streamlined session detail processing for faster updates
    • Improved token usage calculations efficiency
Commit message

The renderer crashes with exitCode 133 (V8 OOM) after ~24 hours. Root causes: full conversation re-creation on every refresh, O(N²) injection accumulation, and raw session data stored in memory. Fixes:

  • Strip detail.chunks/processes after transformation (slimDetail)
  • Incremental conversation update — reuse unchanged ChatItem objects, only re-transform the tail (O(delta) vs O(N) per refresh)
  • Fingerprint-based skip when session content is unchanged
  • Adaptive debounce scaled per-session (60s for >1000 AI groups down to 150ms default); uses target session's totalAIGroups
  • batchAsync for concurrency-limited IPC calls
  • Phase 1/Phase 2 split: render conversation immediately, compute context stats async
  • O(N²) → O(N) injection accumulation in context/claudeMd trackers
  • Precomputed accumulatedCounts on ContextStats
  • Export re-fetches full detail on demand (MoreMenu)
  • Lower VIRTUALIZATION_THRESHOLD 120 → 30 (virtualize sooner to keep fewer DOM nodes for large sessions)
  • Dynamic renderer heap: 50% of system RAM clamped to [2–4 GB]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the performance and memory efficiency of long-running sessions, specifically targeting renderer heap exhaustion. It introduces several memory-only optimizations, such as dynamic heap sizing, incremental conversation updates, adaptive refresh debouncing, and deferred context processing. These changes aim to reduce memory allocation, prevent O(N²) operations, and ensure a smoother user experience during extended usage.

Highlights

  • Dynamic Renderer Heap Sizing: Implemented dynamic calculation for the renderer's V8 heap size, setting it to 50% of system RAM, clamped between 2GB and 4GB, to adapt to different machine specifications.
  • Incremental Conversation Updates: Introduced an incremental update mechanism for conversations, reusing unchanged ChatItem objects to reduce memory allocation and improve performance during live session refreshes.
  • Adaptive Session Refresh Debounce: Implemented an adaptive debounce for session refreshes, dynamically adjusting the refresh interval (from 150ms to 60s) based on the target session's size to reduce memory churn.
  • Memory Optimization for Session Data: Optimized memory usage by stripping raw session 'chunks' and 'processes' data from the store after conversation transformation, re-fetching them on demand only for export operations.
  • O(N²) to O(N) Performance Fixes: Refactored context and CLAUDE.md trackers to use mutable push loops and thread 'previousPaths' sets, eliminating O(N²) operations and significantly improving performance for large sessions.
  • Two-Phase Session Detail Processing: Split session detail fetching into two phases: immediate rendering of the conversation for responsiveness, and deferred, asynchronous computation of context statistics to avoid blocking the UI.
  • Concurrency-Limited IPC Calls: Added a 'batchAsync' utility to limit the concurrency of IPC calls, preventing unbounded parallel operations that could exhaust the renderer heap.
  • Earlier Chat Virtualization: Lowered the 'VIRTUALIZATION_THRESHOLD' in chat history from 120 to 30, enabling earlier virtualization for large chat sessions and reducing the number of active DOM nodes.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@coderabbitai bot added the `bug` (Something isn't working) label on Mar 18, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an excellent pull request that comprehensively addresses renderer heap exhaustion through a series of well-executed performance optimizations. The changes demonstrate a deep understanding of the performance bottlenecks and apply effective strategies to mitigate them.

The introduction of incremental conversation updates, fingerprint-based skipping, and adaptive debouncing will significantly reduce unnecessary processing and memory churn, especially in long-running sessions. The memory-saving techniques, such as stripping raw chunk data after transformation and fixing O(N²) complexities in context tracking, are crucial for improving the application's stability over time.

I was particularly impressed by the Phase 1/Phase 2 split for session loading, which will greatly enhance perceived performance by rendering the conversation immediately while deferring expensive statistics computation. The addition of the batchAsync utility for concurrent IPC calls and the dynamic renderer heap sizing are also smart improvements for robust resource management.

The code is high-quality, well-commented, and the changes are thoughtfully implemented with attention to details like immutability. Overall, this is a fantastic contribution that will substantially improve the performance and reliability of the application. I found no issues that meet the severity threshold for comments.

@coderabbitai

coderabbitai bot commented Mar 18, 2026

📝 Walkthrough

Walkthrough

This PR introduces performance optimizations and refactors state management across the application. Changes include dynamic renderer heap limit configuration, adaptive session refresh debouncing, two-phase session detail processing with fingerprinting and batching, refactored context accumulation logic, incremental conversation updates, and UI prop changes to decouple session export from stored session details.

Changes

Cohort / File(s) Summary
Main Process Configuration
src/main/index.ts
Added dynamic renderer heap memory limit calculation: computes total RAM in MB and clamps heap size to [2048 MB, 4096 MB] range via Node.js flag --max-old-space-size, applied before app.whenReady().
UI Components
src/renderer/components/chat/ChatHistory.tsx, src/renderer/components/common/TokenUsageDisplay.tsx, src/renderer/components/layout/MoreMenu.tsx, src/renderer/components/layout/TabBar.tsx
Updated virtualization threshold from 120 to 30 in ChatHistory; refactored TokenUsageDisplay to use precomputed accumulatedCounts instead of array filtering; changed MoreMenu prop from activeTabSessionDetail object to activeTabHasSession boolean with new export loading state and api-based session detail fetching; updated TabBar to pass new boolean prop to MoreMenu.
Session State & Refresh
src/renderer/store/index.ts, src/renderer/store/slices/sessionDetailSlice.ts
Introduced adaptive debounce scaling for session refresh based on AI group count (60s for >1000 groups, 30s for >500, etc.); implemented two-phase session detail update flow with Phase 1 rendering conversation immediately and Phase 2 asynchronously computing CLAUDE.md/context stats; added fingerprint-based optimization to skip chunk transformations when inputs haven't changed; introduced incremental conversation updates and batched IPC/file-token fetches.
Utilities & Type System
src/renderer/store/utils/batchAsync.ts, src/renderer/types/contextInjection.ts, src/renderer/utils/claudeMdTracker.ts, src/renderer/utils/contextTracker.ts, src/renderer/utils/groupTransformer.ts
Added batchAsync utility for concurrency-limited batch processing of async operations; extended ContextStats interface with accumulatedCounts field; refactored claudeMdTracker and contextTracker to thread previousPaths through computation flow and mutate accumulatedInjections in-place instead of spreading; added incrementalUpdateConversation function to reuse unchanged ChatItems when appending new chunks, falling back to full transform when needed.

Possibly Related PRs

Suggested Labels

bug





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/renderer/store/slices/sessionDetailSlice.ts (1)

640-708: ⚠️ Potential issue | 🟠 Major

Refresh the derived context maps alongside conversation.

This path swaps in newConversation/slimDetail for the session and every open tab, but it never clears or recomputes sessionClaudeMdStats, sessionContextStats, or sessionPhaseInfo. After a new AI group or compaction arrives, those maps no longer match the current conversation IDs, so the token/context panels show stale or missing data until a full fetch. The clean fix is to reuse the same Phase 2 pipeline from fetchSessionDetail() here as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/slices/sessionDetailSlice.ts` around lines 640 - 708, The update replaces conversation/slimDetail for the session and tabs but doesn't refresh derived maps sessionClaudeMdStats, sessionContextStats, or sessionPhaseInfo, leaving token/context panels stale; run the same Phase 2 pipeline used in fetchSessionDetail() after swapping in newConversation/slimDetail to recompute those maps and write them into the store. Concretely: after creating slimDetail/newConversation and before set({ tabSessionData: latestTabSessionData }) (or inside the same set callback), invoke the internal Phase 2 logic from fetchSessionDetail() (or extract it into a helper like computeSessionDerivedMaps(sessionId, conversation, slimDetail)) to produce updated sessionClaudeMdStats, sessionContextStats, and sessionPhaseInfo entries for this sessionId, then include those updated maps in the state update and in latestTabSessionData entries so per-tab views see refreshed derived data. Ensure you reference and update the same keys used in the slice (sessionClaudeMdStats, sessionContextStats, sessionPhaseInfo) and reuse fetchSessionDetail() behavior to keep logic consistent. 
🧹 Nitpick comments (5)
src/main/index.ts (2)

12-23: Reorder imports to match the repo’s import ordering rule.

Line 12 currently places @shared/* imports before external packages. Please move external imports (electron, fs, os, path) above path-alias imports.

Proposed diff
-import {
-  DEFAULT_WINDOW_HEIGHT,
-  DEFAULT_WINDOW_WIDTH,
-  DEV_SERVER_PORT,
-  getTrafficLightPositionForZoom,
-  WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL,
-} from '@shared/constants';
-import { createLogger } from '@shared/utils/logger';
 import { app, BrowserWindow, ipcMain } from 'electron';
 import { existsSync } from 'fs';
 import { totalmem } from 'os';
 import { join } from 'path';
+import {
+  DEFAULT_WINDOW_HEIGHT,
+  DEFAULT_WINDOW_WIDTH,
+  DEV_SERVER_PORT,
+  getTrafficLightPositionForZoom,
+  WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL,
+} from '@shared/constants';
+import { createLogger } from '@shared/utils/logger';

As per coding guidelines, **/*.{ts,tsx} should "Organize imports in order: external packages, path aliases (@main, @renderer, @shared, @preload), then relative imports".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/index.ts` around lines 12 - 23, The import ordering in this file is wrong: move external package imports (electron's app, BrowserWindow, ipcMain; fs.existsSync; os.totalmem; path.join) above the path-alias imports (symbols DEFAULT_WINDOW_HEIGHT, DEFAULT_WINDOW_WIDTH, DEV_SERVER_PORT, getTrafficLightPositionForZoom, WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL and createLogger) so the module list follows the repo rule (external packages first, then path aliases, then relative imports); update the import block so the same identifiers (DEFAULT_WINDOW_HEIGHT, createLogger, app, BrowserWindow, ipcMain, existsSync, totalmem, join, etc.) are preserved but reordered accordingly. 

33-35: Rename heap constants to UPPER_SNAKE_CASE.

Lines 33-35 introduce constant values using totalMB and heapMB, which conflicts with the constant naming convention.

Proposed diff
-const totalMB = Math.floor(totalmem() / (1024 * 1024));
-const heapMB = Math.min(4096, Math.max(2048, Math.floor(totalMB * 0.5)));
-app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${heapMB}`);
+const TOTAL_MB = Math.floor(totalmem() / (1024 * 1024));
+const HEAP_MB = Math.min(4096, Math.max(2048, Math.floor(TOTAL_MB * 0.5)));
+app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${HEAP_MB}`);

As per coding guidelines, **/*.{ts,tsx} should "Use UPPER_SNAKE_CASE for constant values".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/index.ts` around lines 33 - 35, Rename the lowerCamel constants to UPPER_SNAKE_CASE: change totalMB to TOTAL_MB and heapMB to HEAP_MB, update the Math calculation assignment accordingly (e.g., const TOTAL_MB = Math.floor(totalmem() / (1024 * 1024)); const HEAP_MB = Math.min(4096, Math.max(2048, Math.floor(TOTAL_MB * 0.5)));), and propagate the new symbol HEAP_MB into the call to app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${HEAP_MB}`) so all references match the naming guideline. 
src/renderer/components/layout/MoreMenu.tsx (1)

40-40: Prevent overlapping exports while one export is in flight.

Without a loading guard, users can trigger duplicate exports by reopening the menu during an active request.

♻️ Suggested hardening
 const handleExport = useCallback(
   (format: ExportFormat) => {
+    if (exportLoading) return;
     if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
     const { projectId, sessionId } = activeTab;
     setIsOpen(false);
     setExportLoading(true);
@@
-  [activeTab]
+  [activeTab, exportLoading]
 );

Also applies to: 74-90, 113-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/layout/MoreMenu.tsx` at line 40, The MoreMenu component's export handlers can trigger duplicate exports because exportLoading is not used to guard actions; update each export function (e.g., exportSelectedConversations, exportAllConversations, and any handlers referenced around lines ~74-90 and ~113-131) to check exportLoading at the top and return early if true, wrap the async export logic with setExportLoading(true) before the try and setExportLoading(false) in a finally block, and also use exportLoading to disable the export-related menu items/buttons (or prevent menu reopen) while the request is in flight so only one export can run at a time. 
src/renderer/store/index.ts (1)

87-103: Keep the debounce sizing lookup O(1) and session-scoped.

When tabData is missing, this falls back to filtering state.conversation.items, which both walks the full active transcript on every watcher event and can size the debounce from the wrong session. SessionConversation already exposes totalAIGroups, so the fallback can stay constant-time and only apply to the selected session.

♻️ Suggested change
-  const aiGroupCount =
-    tabData?.conversation?.totalAIGroups ??
-    (state.conversation?.items ?? []).filter((i) => i.type === 'ai').length;
+  const aiGroupCount =
+    tabData?.conversation?.totalAIGroups ??
+    (state.selectedSessionId === sessionId ? state.conversation?.totalAIGroups : 0) ??
+    0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/index.ts` around lines 87 - 103, The debounce sizing walks the entire global transcript when tabData is missing and may size debounce for the wrong session; change aiGroupCount to be session-scoped and O(1) by using the selected session's conversation totalAIGroups instead of filtering state.conversation.items. Replace the fallback expression in the aiGroupCount computation so it uses state.tabSessionData[sessionId]?.conversation?.totalAIGroups (or 0 if missing) when tabData is absent, leaving the rest of the debounce logic (aiGroupCount thresholds and SESSION_REFRESH_DEBOUNCE_MS) unchanged. 
src/renderer/utils/contextTracker.ts (1)

857-894: Carry forward running totals instead of rescanning accumulatedInjections.

previousInjections.push(...) removes the spread allocations, but this block still recomputes every category total by scanning the full accumulated array for each AI group. On the long-running sessions this PR targets, Phase 2 is still O(N²) CPU. Thread the previous tokensByCategory and accumulatedCounts forward, then update them from newInjections only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/utils/contextTracker.ts` around lines 857 - 894, The current loop rescans accumulatedInjections to recompute tokensByCategory and accumulatedCounts causing O(N²) behavior; instead carry forward the previous tokensByCategory and accumulatedCounts and only apply increments for the newInjections batch. Replace the for..of over accumulatedInjections with iterating newInjections, updating tokensByCategory (claudeMd, mentionedFiles, toolOutputs, thinkingText, taskCoordination, userMessages) and accumulatedCounts using the same category branches as the switch (use injection.estimatedTokens, injection.toolCount, injection.breakdown.length, etc.), and remove the reinitialization of accumulatedCounts that wipes prior totals so previous values plus newInjections produce the running totals. 
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed. Inline comments: In `@src/renderer/components/layout/MoreMenu.tsx`: - Around line 74-88: The handleExport handler currently calls api.getSessionDetail without catching rejections, causing possible unhandled promise rejections; update handleExport (the useCallback that calls api.getSessionDetail, triggerDownload, setExportLoading, setIsOpen) to use async/await or add a .catch block so errors are handled: surround the API call in try/catch, log the error (e.g., console.error('Failed to export session', error)) and surface user feedback (toast or error state) on failure, and ensure setExportLoading(false) runs in a finally block so loading state is always cleared. In `@src/renderer/store/slices/sessionDetailSlice.ts`: - Around line 587-593: The code is using get().conversation (the global active-session transcript) when computing newConversation in refreshSessionInPlace(), which can reuse ChatItems from the wrong session; instead fetch the previous conversation from the target session object (the session being refreshed) and pass that into incrementalUpdateConversation. Replace references to get().conversation with the target session's conversation (e.g., targetSession.conversation or sessions[sessionId].conversation), and keep the same fallback to transformChunksToConversation when that target conversation is undefined or empty; still pass enhancedChunks, [] and isOngoing to incrementalUpdateConversation/transformChunksToConversation. In `@src/renderer/store/utils/batchAsync.ts`: - Around line 5-21: The batchAsync helper does not validate concurrency, so values <= 0 (or non-integers) prevent any workers from starting; in batchAsync, normalize the concurrency parameter before starting workers by coercing it to a positive integer (e.g. 
const effectiveConcurrency = Math.max(1, Math.floor(concurrency))) and use effectiveConcurrency in the Array.from call that starts workers; this ensures at least one worker runs (or alternatively throw if you prefer failing fast), and keeps the rest of the function (results array, worker function) unchanged. --- Outside diff comments: In `@src/renderer/store/slices/sessionDetailSlice.ts`: - Around line 640-708: The update replaces conversation/slimDetail for the session and tabs but doesn't refresh derived maps sessionClaudeMdStats, sessionContextStats, or sessionPhaseInfo, leaving token/context panels stale; run the same Phase 2 pipeline used in fetchSessionDetail() after swapping in newConversation/slimDetail to recompute those maps and write them into the store. Concretely: after creating slimDetail/newConversation and before set({ tabSessionData: latestTabSessionData }) (or inside the same set callback), invoke the internal Phase 2 logic from fetchSessionDetail() (or extract it into a helper like computeSessionDerivedMaps(sessionId, conversation, slimDetail)) to produce updated sessionClaudeMdStats, sessionContextStats, and sessionPhaseInfo entries for this sessionId, then include those updated maps in the state update and in latestTabSessionData entries so per-tab views see refreshed derived data. Ensure you reference and update the same keys used in the slice (sessionClaudeMdStats, sessionContextStats, sessionPhaseInfo) and reuse fetchSessionDetail() behavior to keep logic consistent. 
--- Nitpick comments: In `@src/main/index.ts`: - Around line 12-23: The import ordering in this file is wrong: move external package imports (electron's app, BrowserWindow, ipcMain; fs.existsSync; os.totalmem; path.join) above the path-alias imports (symbols DEFAULT_WINDOW_HEIGHT, DEFAULT_WINDOW_WIDTH, DEV_SERVER_PORT, getTrafficLightPositionForZoom, WINDOW_ZOOM_FACTOR_CHANGED_CHANNEL and createLogger) so the module list follows the repo rule (external packages first, then path aliases, then relative imports); update the import block so the same identifiers (DEFAULT_WINDOW_HEIGHT, createLogger, app, BrowserWindow, ipcMain, existsSync, totalmem, join, etc.) are preserved but reordered accordingly. - Around line 33-35: Rename the lowerCamel constants to UPPER_SNAKE_CASE: change totalMB to TOTAL_MB and heapMB to HEAP_MB, update the Math calculation assignment accordingly (e.g., const TOTAL_MB = Math.floor(totalmem() / (1024 * 1024)); const HEAP_MB = Math.min(4096, Math.max(2048, Math.floor(TOTAL_MB * 0.5)));), and propagate the new symbol HEAP_MB into the call to app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${HEAP_MB}`) so all references match the naming guideline. In `@src/renderer/components/layout/MoreMenu.tsx`: - Line 40: The MoreMenu component's export handlers can trigger duplicate exports because exportLoading is not used to guard actions; update each export function (e.g., exportSelectedConversations, exportAllConversations, and any handlers referenced around lines ~74-90 and ~113-131) to check exportLoading at the top and return early if true, wrap the async export logic with setExportLoading(true) before the try and setExportLoading(false) in a finally block, and also use exportLoading to disable the export-related menu items/buttons (or prevent menu reopen) while the request is in flight so only one export can run at a time. 
In `@src/renderer/store/index.ts`: - Around line 87-103: The debounce sizing walks the entire global transcript when tabData is missing and may size debounce for the wrong session; change aiGroupCount to be session-scoped and O(1) by using the selected session's conversation totalAIGroups instead of filtering state.conversation.items. Replace the fallback expression in the aiGroupCount computation so it uses state.tabSessionData[sessionId]?.conversation?.totalAIGroups (or 0 if missing) when tabData is absent, leaving the rest of the debounce logic (aiGroupCount thresholds and SESSION_REFRESH_DEBOUNCE_MS) unchanged. In `@src/renderer/utils/contextTracker.ts`: - Around line 857-894: The current loop rescans accumulatedInjections to recompute tokensByCategory and accumulatedCounts causing O(N²) behavior; instead carry forward the previous tokensByCategory and accumulatedCounts and only apply increments for the newInjections batch. Replace the for..of over accumulatedInjections with iterating newInjections, updating tokensByCategory (claudeMd, mentionedFiles, toolOutputs, thinkingText, taskCoordination, userMessages) and accumulatedCounts using the same category branches as the switch (use injection.estimatedTokens, injection.toolCount, injection.breakdown.length, etc.), and remove the reinitialization of accumulatedCounts that wipes prior totals so previous values plus newInjections produce the running totals. 

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 39fa130c-9c95-4fcf-8865-8a2335270d3e

📥 Commits

Reviewing files that changed from the base of the PR and between 6a655b9 and fc9c67d.

📒 Files selected for processing (12)
  • src/main/index.ts
  • src/renderer/components/chat/ChatHistory.tsx
  • src/renderer/components/common/TokenUsageDisplay.tsx
  • src/renderer/components/layout/MoreMenu.tsx
  • src/renderer/components/layout/TabBar.tsx
  • src/renderer/store/index.ts
  • src/renderer/store/slices/sessionDetailSlice.ts
  • src/renderer/store/utils/batchAsync.ts
  • src/renderer/types/contextInjection.ts
  • src/renderer/utils/claudeMdTracker.ts
  • src/renderer/utils/contextTracker.ts
  • src/renderer/utils/groupTransformer.ts
Comment on lines 74 to +88
 const handleExport = useCallback(
   (format: ExportFormat) => {
-    if (activeTabSessionDetail) {
-      triggerDownload(activeTabSessionDetail, format);
-    }
+    if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
+    const { projectId, sessionId } = activeTab;
     setIsOpen(false);
+    setExportLoading(true);
+    // Re-fetch full detail (with chunks) since we strip them from the store to save memory.
+    void api
+      .getSessionDetail(projectId, sessionId)
+      .then((detail) => {
+        if (detail) triggerDownload(detail, format);
+      })
+      .finally(() => {
+        setExportLoading(false);
+      });


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# 1) Confirm current export path has no catch handler.
rg -n -C3 --type=tsx 'getSessionDetail\([^)]*\)|\.then\(|\.catch\(|\.finally\(' src/renderer/components/layout/MoreMenu.tsx
# 2) Find existing renderer store error-state actions to wire this component consistently.
rg -n -C2 --type=ts --type=tsx 'set[A-Za-z]*Error|report[A-Za-z]*Error|errorState|lastError' src/renderer/store src/renderer

Repository: matt1398/claude-devtools

Length of output: 123


🏁 Script executed:

#!/bin/bash
# Read the specific file to see the current implementation
cat -n src/renderer/components/layout/MoreMenu.tsx | head -100

Repository: matt1398/claude-devtools

Length of output: 3917


🏁 Script executed:

#!/bin/bash
# Search for error handling patterns without type restrictions
rg -n 'getSessionDetail|\.catch\(|\.finally\(' src/renderer/components/layout/MoreMenu.tsx -A 2 -B 2

Repository: matt1398/claude-devtools

Length of output: 418


🏁 Script executed:

#!/bin/bash
# Find renderer store error handling patterns
find src/renderer/store -type f -name "*.ts" -o -name "*.tsx" | head -20

Repository: matt1398/claude-devtools

Length of output: 882


🏁 Script executed:

#!/bin/bash
# Check for error state management in renderer
rg -n 'Error|error' src/renderer/store --type ts -l | head -10

Repository: matt1398/claude-devtools

Length of output: 487


🏁 Script executed:

#!/bin/bash
# Check notificationSlice for error handling patterns
cat -n src/renderer/store/slices/notificationSlice.ts

Repository: matt1398/claude-devtools

Length of output: 11118


🏁 Script executed:

#!/bin/bash
# Look for error handling in store index
cat -n src/renderer/store/index.ts | head -150

Repository: matt1398/claude-devtools

Length of output: 7528


🏁 Script executed:

#!/bin/bash
# Check triggerDownload function behavior
cat -n src/renderer/utils/sessionExporter.ts

Repository: matt1398/claude-devtools

Length of output: 16028


🏁 Script executed:

#!/bin/bash
# Search for how errors are typically handled in renderer components
rg -n 'console\.error|catch|try {' src/renderer/components -A 2 | head -50

Repository: matt1398/claude-devtools

Length of output: 183


🏁 Script executed:

#!/bin/bash
# Search for error toast/notification patterns
rg -n 'toast|notification|showError|setError' src/renderer/components --type ts -A 2 -B 1 | head -60

Repository: matt1398/claude-devtools

Length of output: 5693


🏁 Script executed:

#!/bin/bash
# Look for try/catch patterns in other renderer components
rg -n 'try \{' src/renderer/components -A 5 | head -80

Repository: matt1398/claude-devtools

Length of output: 6922


🏁 Script executed:

#!/bin/bash
# Check if there's a component-level error state pattern
rg -n 'useState.*[Ee]rror' src/renderer/components -B 1 -A 1 | head -50

Repository: matt1398/claude-devtools

Length of output: 808


Add error handling to the export API call.

The promise chain can reject without handling, causing unhandled rejections and silent export failures in the UI. Wrap the call in try/catch and provide user feedback on failure:

const handleExport = useCallback(
  async (format: ExportFormat) => {
    if (activeTab?.type !== 'session' || !activeTab.projectId || !activeTab.sessionId) return;
    const { projectId, sessionId } = activeTab;
    setIsOpen(false);
    setExportLoading(true);
    try {
      const detail = await api.getSessionDetail(projectId, sessionId);
      if (!detail) return;
      triggerDownload(detail, format);
    } catch (error) {
      console.error('Failed to export session', error);
      // Consider adding user feedback (e.g., toast notification or error state)
    } finally {
      setExportLoading(false);
    }
  },
  [activeTab]
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/layout/MoreMenu.tsx` around lines 74 - 88, The handleExport handler currently calls api.getSessionDetail without catching rejections, causing possible unhandled promise rejections; update handleExport (the useCallback that calls api.getSessionDetail, triggerDownload, setExportLoading, setIsOpen) to use async/await or add a .catch block so errors are handled: surround the API call in try/catch, log the error (e.g., console.error('Failed to export session', error)) and surface user feedback (toast or error state) on failure, and ensure setExportLoading(false) runs in a finally block so loading state is always cleared. 
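To make the failure mode concrete, here is a minimal self-contained sketch of the suggested async/await shape. `safeExport`, `fetchDetail`, and `download` are hypothetical stand-ins for the component handler, `api.getSessionDetail`, and `triggerDownload` respectively — not the actual code in this PR:

```typescript
// Hypothetical stand-ins for api.getSessionDetail / triggerDownload.
async function safeExport(
  fetchDetail: () => Promise<string | null>,
  download: (detail: string) => void
): Promise<boolean> {
  try {
    const detail = await fetchDetail();
    if (!detail) return false;
    download(detail);
    return true;
  } catch (error) {
    // Without this catch, a rejected fetch surfaces as an unhandled rejection
    // even though .finally() runs — .finally() passes failures through.
    console.error('Failed to export session', error);
    return false;
  } finally {
    // The component would clear its loading state here, on every code path.
  }
}
```

The key point is that `finally` alone does not absorb the rejection; only the `catch` does, which is why the original `.then().finally()` chain can leak.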
Comment on lines +587 to +593
// Use incremental update when a previous conversation exists —
// reuses unchanged ChatItem objects, only re-transforms the tail.
const prevConversation = get().conversation;
const newConversation =
prevConversation && prevConversation.items.length > 0
? incrementalUpdateConversation(prevConversation, enhancedChunks, [], isOngoing)
: transformChunksToConversation(enhancedChunks, [], isOngoing);

⚠️ Potential issue | 🔴 Critical

Use the target session’s previous conversation here.

get().conversation is the global active-session transcript. When refreshSessionInPlace() runs for a session that is only visible in another pane, this hands incrementalUpdateConversation() the wrong prefix and reuses ChatItems from a different session.

🐛 Suggested change
- const prevConversation = get().conversation;
+ const refreshState = get();
+ const prevConversation =
+   refreshState.selectedSessionId === sessionId
+     ? refreshState.conversation
+     : Object.values(refreshState.tabSessionData).find(
+         (tabData) => tabData.sessionDetail?.session?.id === sessionId
+       )?.conversation ?? null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/slices/sessionDetailSlice.ts` around lines 587 - 593, The code is using get().conversation (the global active-session transcript) when computing newConversation in refreshSessionInPlace(), which can reuse ChatItems from the wrong session; instead fetch the previous conversation from the target session object (the session being refreshed) and pass that into incrementalUpdateConversation. Replace references to get().conversation with the target session's conversation (e.g., targetSession.conversation or sessions[sessionId].conversation), and keep the same fallback to transformChunksToConversation when that target conversation is undefined or empty; still pass enhancedChunks, [] and isOngoing to incrementalUpdateConversation/transformChunksToConversation. 
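For intuition, the prefix-reuse idea behind `incrementalUpdateConversation()` can be sketched with simplified stand-in types. `Chunk`, `ChatItem`, and `transform` here are hypothetical, not the `groupTransformer.ts` implementation:

```typescript
// Simplified stand-ins; the real ChatItem/transform live in groupTransformer.ts.
type Chunk = { id: string; text: string };
type ChatItem = { chunkId: string; rendered: string };

const transform = (c: Chunk): ChatItem => ({ chunkId: c.id, rendered: c.text.toUpperCase() });

// Reuse every item derived from an unchanged chunk by reference; re-transform
// only the previously-last chunk (it may have grown while ongoing) plus any
// new chunks. Per-refresh allocation stays proportional to the delta.
function incrementalUpdate(prevItems: ChatItem[], chunks: Chunk[]): ChatItem[] {
  const reuseCount = Math.max(0, prevItems.length - 1);
  const reused = prevItems.slice(0, reuseCount);
  const tail = chunks.slice(reuseCount).map(transform);
  return [...reused, ...tail];
}
```

This also shows why feeding it the wrong session's `prevItems` is dangerous: the reused prefix is taken on faith, by reference, without re-checking which session it came from.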
Comment on lines +5 to +21
export async function batchAsync<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let nextIndex = 0;

  async function worker(): Promise<void> {
    while (nextIndex < items.length) {
      const index = nextIndex++;
      results[index] = await fn(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, () => worker())

⚠️ Potential issue | 🟡 Minor

Validate concurrency before starting workers.

If concurrency <= 0, no worker is launched and this returns a sparse results array without processing anything. Since this helper is exported, it should fail fast or clamp the value.

🛡️ Suggested change
  export async function batchAsync<T, R>(
    items: T[],
    fn: (item: T) => Promise<R>,
    concurrency = 5
  ): Promise<R[]> {
+   if (!Number.isInteger(concurrency) || concurrency < 1) {
+     throw new RangeError('concurrency must be >= 1');
+   }
+
    const results = new Array<R>(items.length);
    let nextIndex = 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/store/utils/batchAsync.ts` around lines 5 - 21, The batchAsync helper does not validate concurrency, so values <= 0 (or non-integers) prevent any workers from starting; in batchAsync, normalize the concurrency parameter before starting workers by coercing it to a positive integer (e.g. const effectiveConcurrency = Math.max(1, Math.floor(concurrency))) and use effectiveConcurrency in the Array.from call that starts workers; this ensures at least one worker runs (or alternatively throw if you prefer failing fast), and keeps the rest of the function (results array, worker function) unchanged. 
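If clamping is preferred over throwing (the alternative the prompt allows), it could look like the following. This is a hedged sketch, not the committed `batchAsync` — hence the different name:

```typescript
// A sketch (not the committed code): clamp concurrency instead of throwing,
// so callers can never get back a sparse, unprocessed results array.
async function batchAsyncClamped<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5
): Promise<R[]> {
  const effective = Math.max(1, Math.floor(concurrency)); // 0, -3, 2.7 all become safe
  const results = new Array<R>(items.length);
  let nextIndex = 0;

  async function worker(): Promise<void> {
    // nextIndex++ is synchronous, so workers never double-claim an index.
    while (nextIndex < items.length) {
      const index = nextIndex++;
      results[index] = await fn(items[index]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(effective, items.length) }, () => worker())
  );
  return results;
}
```

Clamping trades an explicit contract violation for forgiveness; throwing a `RangeError`, as the suggested change does, surfaces caller bugs sooner. Either is defensible, but the current code silently does neither.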

Labels

bug Something isn't working

1 participant