
RecurLoRA v2: Pass Index Embeddings + Low-Rank Adapters on SP8192 Depth Recurrence#1552

Open
Tanush1912 wants to merge 1 commit into openai:main from Tanush1912:submission/recurlora-v2-sp8192

Conversation

@Tanush1912

Summary

  • Pass index embeddings: learned per-pass vectors added to hidden states before repeated layer execution, giving shared weights a "which iteration am I?" signal (3072 params, 6KB)
  • RecurLoRA: rank-2 LoRA corrections on attention projections (Q,K,V,O) for repeated passes (21K params, 42KB)
  • Built on the current frontier stack: SP8192, 3-layer depth recurrence (layers 3-5), parallel residuals L7+, SDClip (int6 matrix / int8 embed), MuonEq-R, QK-Gain 5.25, score-first TTT
  • Total novel overhead: 48KB (0.3% of 16MB budget), kept as fp16 passthrough
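The byte accounting in the summary can be checked directly. A quick arithmetic sketch, assuming fp16 (2 bytes per parameter) and reading "21K" as 21 × 1024 parameters:

```python
# Sanity-check the stated parameter/byte accounting (fp16 = 2 bytes/param).
BYTES_PER_PARAM = 2  # fp16 passthrough, per the summary

pass_embed_params = 3072        # pass index embeddings
lora_params = 21 * 1024         # "21K" rank-2 LoRA params

pass_embed_kb = pass_embed_params * BYTES_PER_PARAM / 1024  # 6.0 KB
lora_kb = lora_params * BYTES_PER_PARAM / 1024              # 42.0 KB
total_kb = pass_embed_kb + lora_kb                          # 48.0 KB

budget_kb = 16 * 1024  # 16 MB budget
overhead_pct = 100 * total_kb / budget_kb

print(pass_embed_kb, lora_kb, total_kb, round(overhead_pct, 2))
# -> 6.0 42.0 48.0 0.29
```

The 0.29% result matches the "0.3% of budget" figure stated above, and the 6KB/42KB splits match the per-mechanism numbers in the summary bullets.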

Why this direction

The current SOTA (PR #1493, 1.0810 BPB) uses 3-layer recurrence where layers 3-5 execute identically on every pass — no mechanism distinguishes pass 1 from pass 3. This submission adds two complementary per-pass specialization mechanisms:

  • Pass embeddings modify the input to shared layers (inspired by Universal Transformers, Dehghani et al. 2019)
  • LoRA corrections modify the attention weights of shared layers

Together they allow shared layers to condition behavior on recurrence depth at negligible cost, without incurring the quantization error amplification that kills deeper recurrence.
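A minimal NumPy sketch of how the two mechanisms might enter the recurrent loop. Dimensions, names, the single stand-in projection, and the rank/pass counts here are illustrative assumptions, not taken from the PR's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank, n_passes = 64, 2, 3  # illustrative sizes, not the PR's

# Shared attention projection (stands in for one of Q/K/V/O).
W = rng.standard_normal((d_model, d_model)) * 0.02

# Pass index embeddings: one learned vector per recurrent pass.
pass_embed = rng.standard_normal((n_passes, d_model)) * 0.02

# Per-pass rank-2 LoRA factors, delta_W = B @ A, with B zero-initialized
# (standard LoRA init, so all passes start out behaving identically).
A = rng.standard_normal((n_passes, rank, d_model)) * 0.02
B = np.zeros((n_passes, d_model, rank))

def recurrent_block(h):
    """Run the shared projection n_passes times with per-pass specialization."""
    for p in range(n_passes):
        h = h + pass_embed[p]      # "which iteration am I?" signal
        W_eff = W + B[p] @ A[p]    # low-rank per-pass weight correction
        h = h @ W_eff.T
    return h

h = rng.standard_normal((4, d_model))  # (tokens, d_model)
out = recurrent_block(h)
print(out.shape)  # (4, 64)
```

With B initialized to zero, every pass initially applies the unmodified shared weights, so the corrections can only move behavior away from the baseline as training demands it.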

Status

Implementation complete and validated:

  • Syntax, class structure, and method signatures verified
  • Pass counting logic confirmed correct across encoder/decoder traversal
  • LoRA shape compatibility with weight banks verified
  • All new parameters are fp16 passthrough, avoiding additional quantization error

Full training runs (3 seeds + ablations) pending compute.

Test plan

  • Full stack without RecurLoRA/pass embeddings — confirm baseline matches ~1.081 BPB
  • Full stack with pass embeddings only — isolate embedding contribution
  • Full stack with RecurLoRA + pass embeddings — measure combined effect (3 seeds)
  • Report mean +/- std across seeds
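The last bullet is plain sample statistics over the three seeds. A sketch with made-up BPB numbers (the real values pend the compute runs):

```python
import statistics

# Hypothetical per-seed BPB results, for illustration only.
bpb = [1.0792, 1.0785, 1.0801]

mean = statistics.mean(bpb)
std = statistics.stdev(bpb)  # sample std (n-1 denominator)
print(f"{mean:.4f} +/- {std:.4f}")  # -> 1.0793 +/- 0.0008
```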
