Non-record: 11L GEPA + 25k Steps + Pure Int6 + Legal TTT (val_bpb=1.0944) - unlimited compute category#644
Conversation
- Non-record unlimited-compute submission: val_bpb=1.0944
- 25000-step training (12000 peak-LR + 13000 warmdown) on 4xA100-40GB
- Pure int6 per-row quantization with 15-candidate GPTQ-lite + zstd-22
- Legal score-first TTT (SGD, 10 epochs, momentum 0.9): -0.014 bpb gain
- Float base 1.1088, artifact 13.75 MiB (14,496,936 bytes total)
- Includes model artifact (final_model.int6.ptz) for reproducibility
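As a rough illustration of the per-row int6 quantization step, here is a minimal sketch of a candidate-search quantizer in the spirit of the 15-candidate GPTQ-lite mentioned above. The function name, the symmetric [-31, 31] range, and the 0.8–1.2 scale grid are assumptions for illustration, not the submission's actual implementation.

```python
import numpy as np

def quantize_row_int6(row: np.ndarray, n_candidates: int = 15):
    """Per-row symmetric int6 quantization (values in [-31, 31]) with a
    small grid search over scale candidates, keeping the scale that
    minimizes reconstruction MSE. A simplified stand-in for the
    15-candidate GPTQ-lite search described in the summary."""
    max_abs = float(np.abs(row).max())
    if max_abs == 0.0:
        return np.zeros_like(row, dtype=np.int8), 1.0
    best_q, best_scale, best_err = None, None, np.inf
    # Try scales slightly below/above the naive max-abs scale.
    for factor in np.linspace(0.8, 1.2, n_candidates):
        scale = (max_abs * factor) / 31.0
        q = np.clip(np.round(row / scale), -31, 31).astype(np.int8)
        err = float(np.mean((q * scale - row) ** 2))
        if err < best_err:
            best_q, best_scale, best_err = q, scale, err
    return best_q, best_scale
```

The quantized int8 payload (6 useful bits per value) plus per-row scales is what would then be packed and compressed with zstd at level 22.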
Community Review — Non-record: 11L GEPA + 25k Steps + Pure Int6 + Legal TTT (val_bpb=1.0944) - unlimited compute category

BPB: 1.0944 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 399 implements the score-first-per-chunk pattern, i.e., each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.07s, dim=512, layers=11, vocab=1024, code=78689 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier, and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
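The score-first-per-chunk ordering the review checks for can be sketched as a small loop. `model_loss` and `model_update` are hypothetical placeholders for the real forward pass and SGD step; the point is only the ordering: each chunk's score is recorded under the pre-update weights before the adapter trains on that chunk.

```python
def score_first_ttt(model_loss, model_update, chunks, epochs: int = 10):
    """Score-first-per-chunk TTT: each chunk is scored BEFORE the
    adapter updates on it, so the reported score never benefits from
    training on that same chunk. Multi-epoch adaptation (here 10, as
    in the submission) only helps subsequent chunks."""
    scores = []
    for chunk in chunks:
        scores.append(model_loss(chunk))   # score under pre-update weights
        for _ in range(epochs):
            model_update(chunk)            # then adapt (legal: after scoring)
    return sum(scores) / len(scores)
```

Under this ordering, even aggressive per-chunk adaptation stays within the Issue #402/#677 legality rule, because no token's score ever sees weights that were updated on it.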
Summary
Key Result
Scaling Law (5 data points, warmdown is the dominant lever)
All three metrics improve monotonically: float-base bpb, post-TTT bpb, and artifact size.
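For reference, bits-per-byte is just the model's total cross-entropy converted from nats to bits and normalized by the raw byte count of the evaluation text. A minimal sketch (function name and argument layout are illustrative, not from the submission):

```python
import math

def bits_per_byte(nll_nats_per_token: float, tokens: int, n_bytes: int) -> float:
    """Convert mean next-token cross-entropy (nats/token) to bpb:
    total nats -> total bits (divide by ln 2), then divide by the
    number of raw bytes in the evaluated text."""
    total_bits = nll_nats_per_token * tokens / math.log(2)
    return total_bits / n_bytes
```

With a 1024-entry vocab, tokens cover multiple bytes each, so the tokens/bytes ratio matters as much as the per-token loss.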
Key Insight: Warmdown Acceleration
The bpb improvement accelerates in the final warmdown steps even as the cosine LR schedule decelerates.
This suggests fine-grained optimization at low LR is disproportionately effective.
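The two-phase schedule implied above (hold peak LR for 12000 steps, then warm down over 13000 steps) can be sketched as follows. The peak LR value and the exact cosine shape of the warmdown are assumptions for illustration; only the phase lengths come from the submission.

```python
import math

def lr_at(step: int, peak_lr: float = 1e-3,
          peak_steps: int = 12_000, warmdown_steps: int = 13_000) -> float:
    """Two-phase schedule: constant peak LR for `peak_steps`, then a
    cosine warmdown to zero over `warmdown_steps` (assumed shape)."""
    if step < peak_steps:
        return peak_lr
    t = min(step - peak_steps, warmdown_steps) / warmdown_steps
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * t))
```

The "warmdown acceleration" observation is that most of the late bpb gain lands in the tail of this curve, where the LR is already small.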
Prior Submissions in This Series
Acknowledgments
Builds on techniques from: @signalrush (PR #414, GPTQ-lite/EMA), @jfprincz (PRs #287/#315, XSA/Partial RoPE/LN Scale), @unnir (PR #265, Efficient XSA), @raahilshah (PR #162, SmearGate/BigramHash), @aruniyer (PR #86, Int6 QAT), @samacqua (LoRA TTT), @abaybektursun (PR #549, LeakyReLU²), and the OpenAI baseline.