Warp memory bloat and heavy swap usage when using tsh subshell (regression of #8100) #8934

@mammuthus

Describe the bug

Warp becomes extremely laggy and consumes excessive memory when using tsh subshell sessions.

During normal usage, Warp stays within ~400–500 MB RSS and <1 GB physical footprint. However, after opening a few tabs with tsh ssh sessions, memory usage grows significantly and does not drop over time.

In the degraded state:
• Physical footprint reaches ~7.3–7.4 GB
• System swap usage increases to ~14–15 GB
• Warp UI becomes very slow and barely responsive

This appears similar to a previously fixed issue:
#8100

However, the problem seems to have reappeared in newer versions.

To reproduce

1. Launch Warp
2. Open 2–3 tabs
3. Start tsh subshell sessions, for example: tsh ssh <host> (see the command sketch below)
4. Work normally in these sessions for some time
5. Observe Warp resource usage and responsiveness
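
For illustration, the kind of session that triggers this looks roughly like the sketch below; the proxy address, user, and host are placeholders, not values from this report:

tsh login --proxy=teleport.example.com    # hypothetical Teleport proxy
tsh ssh devuser@build-host                # hypothetical user/host; any tsh ssh subshell session of this kind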

Expected behavior

• Warp memory usage should remain stable over time
• No excessive growth in physical memory footprint
• No heavy swap usage
• UI should remain responsive

Screenshots, videos, and logs

No response

Operating system (OS)

macOS

Operating system and version

Tahoe 26.3 (25D125)

Shell Version

zsh 5.9 (arm64-apple-darwin25.0)

Current Warp version

v0.2026.03.18.08.24.stable_01

Regression

Yes, this bug started recently or with an X Warp version

Recent working Warp date

Not sure really

Additional context

Measured using:

PID=<warp_pid>
while true; do
  date
  ps -o %cpu,%mem,rss -p "$PID"
  vmmap --summary "$PID" | grep 'Physical footprint'
  sysctl vm.swapusage
  echo "---"
  sleep 5
done
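
The same loop can also append each sample to a file, which makes it easier to attach the raw output; this is just a convenience variant, and warp-memory.log is an arbitrary file name:

PID=<warp_pid>
while true; do
  {
    date
    ps -o %cpu,%mem,rss -p "$PID"
    vmmap --summary "$PID" | grep 'Physical footprint'
    sysctl vm.swapusage
    echo "---"
  } | tee -a warp-memory.log
  sleep 5
done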

Example output during degradation:
• RSS: ~1 GB
• Physical footprint: ~7.3–7.4 GB
• Swap usage: ~14–15 GB

CPU usage remains moderate (~20–40% of a single core), indicating the main issue is memory pressure rather than CPU load.

---
Mon Mar 23 11:12:29 +04 2026
 %CPU %MEM     RSS
 36.5  5.4 1020336
Physical footprint:         7.8G
Physical footprint (peak):  7.8G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:12:36 +04 2026
 %CPU %MEM     RSS
 36.2  5.4 1024320
Physical footprint:         7.8G
Physical footprint (peak):  7.8G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:12:42 +04 2026
 %CPU %MEM     RSS
 30.6  5.4 1023952
Physical footprint:         7.8G
Physical footprint (peak):  7.8G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:12:49 +04 2026
 %CPU %MEM     RSS
 39.1  5.6 1050160
Physical footprint:         7.8G
Physical footprint (peak):  7.9G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:12:55 +04 2026
 %CPU %MEM     RSS
 30.8  5.4 1028192
Physical footprint:         7.8G
Physical footprint (peak):  7.9G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:13:01 +04 2026
 %CPU %MEM     RSS
 35.6  5.4 1027456
Physical footprint:         7.8G
Physical footprint (peak):  7.9G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
Mon Mar 23 11:13:08 +04 2026
 %CPU %MEM     RSS
 36.0  5.5 1038736
Physical footprint:         7.8G
Physical footprint (peak):  7.9G
vm.swapusage: total = 16384.00M  used = 15281.75M  free = 1102.25M  (encrypted)
---
vmmap 16174 | sed -n '/MALLOC ZONE/,/TOTAL/p'

REGION TYPE                      VIRTUAL  RESIDENT    DIRTY  SWAPPED  VOLATILE  NONVOL    EMPTY   COUNT
MALLOC_LARGE                        1.6G      1.1G     1.1G   278.3M        0K      0K    8224K      76  see MALLOC ZONE table below
MALLOC_LARGE (empty)              347.6M        0K       0K   340.1M        0K      0K       0K      19  see MALLOC ZONE table below
MALLOC_LARGE (reserved)           356.0M        0K       0K       0K        0K      0K       0K       2  see MALLOC ZONE table below
MALLOC_NANO metadata                288K      144K     144K     144K        0K      0K       0K       9  see MALLOC ZONE table below
MALLOC_REALLOC                    156.5M    130.9M   130.9M       0K        0K      0K       0K       2  see MALLOC ZONE table below
MALLOC_REALLOC (empty)            136.9M        0K       0K   136.9M        0K      0K       0K       3  see MALLOC ZONE table below
MALLOC_SMALL                       10.0G    168.4M   168.4M     9.6G        0K      0K    1936K    2632  see MALLOC ZONE table below
MALLOC_SMALL (empty)              180.4M      224K     224K    1520K        0K      0K       0K      57  see MALLOC ZONE table below
MALLOC_TINY                        4096K      224K     224K      64K        0K      0K       0K       1  see MALLOC ZONE table below
STACK GUARD                         944K        0K       0K       0K        0K      0K       0K      59
Stack                             120.6M     1392K    1392K    3824K        0K      0K       0K      69
Stack (reserved)                    544K        0K       0K       0K        0K      0K       0K       1  reserved VM address space (unallocated)
Stack Guard                        56.0M        0K       0K       0K        0K      0K       0K       2
VM_ALLOCATE                        1552K       32K      32K     624K        0K      0K       0K      66
VM_ALLOCATE (reserved)             7040K        0K       0K       0K        0K      0K       0K      55  reserved VM address space (unallocated)
__AUTH                             5879K     1707K     153K     185K        0K      0K       0K     648
__AUTH_CONST                       89.2M     27.3M      32K      80K        0K      0K       0K    1028
__CTF                                824       824       0K       0K        0K      0K       0K       1
__DATA                             34.9M     7686K     761K     554K        0K      0K       0K     982
__DATA_CONST                       36.5M     17.1M    1856K    1824K        0K      0K       0K    1038
__DATA_DIRTY                       8449K     2070K     952K     808K        0K      0K       0K     887
__FONT_DATA                         2352      2352       0K       0K        0K      0K       0K       1
__INFO_FILTER                          8         8       0K       0K        0K      0K       0K       1
__LINKEDIT                        662.0M     11.1M       0K       0K        0K      0K       0K       5
__OBJC_RO                          78.4M     39.3M       0K       0K        0K      0K       0K       1
__OBJC_RW                          2571K     2235K      43K       0K        0K      0K       0K       1
__TEXT                              1.4G    342.0M       0K       0K        0K      0K       0K    1061
__TPRO_CONST                        128K       32K      32K      80K        0K      0K       0K       2
mapped file                         2.7G     4896K       0K       0K        0K      0K       0K     372
owned unmapped memory             125.1M        0K   121.4M    2720K        0K      0K       0K       1
page table in kernel               8236K     8236K    8236K       0K        0K      0K       0K       1
shared memory                      1152K      144K     144K     352K        0K      0K       0K      32
unused but dirty shlib __DATA       416K      122K     122K     294K        0K      0K       0K     313
===========                      =======   =======  =======  =======   =======  ======   ======  ======
TOTAL                              18.4G      2.0G     1.7G    10.4G        0K  136.2M    14.0M    9753

MALLOC ZONE                              VIRTUAL  RESIDENT    DIRTY  SWAPPED  ALLOCATION COUNT  ALLOCATED  FRAG SIZE  % FRAG  COUNT
DefaultMallocZone_0x1135a8000              11.9G      1.4G     1.4G     9.9G           5442091      11.7G         0K      0%   2820
DefaultPurgeableMallocZone_0x133e68000      9.9M        0K       0K      16K                 5       9.9M         0K      0%      6
QuartzCore_0x113af8000                     1024K      672K     672K     240K              3391       263K       649K     72%     33
LSBindingEvaluator_0x114670000              160K      160K     160K       0K                 0         0K       160K    100%      4
===========                              =======   =======  =======  =======           =======    =======    =======  ======  =====
TOTAL                                      12.0G      1.4G     1.4G     9.9G           5445487      11.7G         0K      0%   2863
• MALLOC_SMALL: 10.0G virtual / 9.6G swapped
• TOTAL swapped: 10.4G
• DefaultMallocZone: ~12G allocated
• ~5.4M allocations
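
If it helps with triage, the swapped size of the MALLOC_SMALL region can be watched directly while the degradation develops; this is only a suggested sketch, reusing the <warp_pid> placeholder from above:

PID=<warp_pid>
while true; do
  date +%T
  # print the aggregated MALLOC_SMALL rows from the region summary (virtual, resident, dirty, swapped, ...)
  vmmap --summary "$PID" | grep 'MALLOC_SMALL'
  sleep 30
done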

Does this block you from using Warp daily?

Yes, this issue prevents me from using Warp daily.

Is this an issue only in Warp?

Yes, I confirmed that this only happens in Warp, not other terminals.

Warp Internal (ignore): linear-label:b9d78064-c89e-4973-b153-5178a31ee54e

None
