Skip to content

SP4096 + Depth Recurrence + Parallel Residuals + Legal N-Gram#1534

Open
someone114514 wants to merge 1 commit intoopenai:mainfrom
someone114514:sp4096-legal-ngram-only-0406
Open

SP4096 + Depth Recurrence + Parallel Residuals + Legal N-Gram#1534
someone114514 wants to merge 1 commit intoopenai:mainfrom
someone114514:sp4096-legal-ngram-only-0406

Conversation

@someone114514
Copy link
Copy Markdown

@someone114514 someone114514 commented Apr 11, 2026

Summary

This PR starts from the SP4096 recurrent / parallel-residual stack and adds a separate prefix-only legal n-gram evaluation path.

Measured result from the included run:

  • pre-quantization post-EMA: 1.09390451
  • final int6 sliding-window: 1.08719574
  • legal n-gram: 1.08457715
  • gain vs sliding: -0.00283638
  • total submission size: 15,967,527 bytes

Method

The training stack is unchanged from the original SP4096 recurrent base. The new scoring path is a legal n-gram overlay:

  1. Build prefix-only token / within-word / word-start experts from already-seen tokens.
  2. Run the frozen language model normally to obtain full-vocab logits.
  3. Apply a one-token bias from the chosen expert.
  4. Renormalize over the full vocabulary.
  5. Score the current token exactly once in a single left-to-right pass.

Details

  • Prefix-only state updates in online_ngram_state.c
  • Token n-gram expert
  • Within-word continuation expert
  • Word-start expert
  • One-token logit tilt plus full-vocab renormalization
  • No target-conditioned gating
  • No two-pass rescoring
  • No weight updates during evaluation

Included Files

  • README.md
  • submission.json
  • train_gpt.py
  • train_gpt_decompressed.py
  • online_best_agree_eval.py
  • online_ngram_state.c
@MatoTeziTanka
Copy link
Copy Markdown

MatoTeziTanka commented Apr 11, 2026

[RETRACTED 2026-04-11] — This IMPORT_FAIL was a false positive. Root cause: sibling module exists in same records/ folder; runner sys.path bug. Your code is not broken. See correction below: #1534 (comment)


Community Review — SP4096 + Depth Recurrence + Parallel Residuals + Legal N-Gram

Compliance: NEEDS AUTHOR ACTION — train_gpt.py fails to import on CT2038 (Python 3.10 / torch 2.10.0+cpu)

What I found: The CPU smoke test on CT2038 (proteus-engine, 128 GB RAM, Triton 3.6.0, flash_attn stub, cutlass_evt_fusion stub) failed at the import step with:

ModuleNotFoundError: No module named 'online_best_agree_eval' 

A few of the common patterns I've seen for this class of error in the 2026-04-11 sweep:

Recommendation: Could you run python3 -c "import py_compile; py_compile.compile('train_gpt.py')" on your records-folder train_gpt.py under Python 3.10 specifically? The eval image is Python 3.10 per Issue #17 / the README, so any parse error on 3.10 blocks the submission at import time before any of the scored-eval logic runs.

Once the parse/import issue is fixed, I'll re-run the compliance audit through the normal pipeline. No other flags identified yet because the audit halts at the import step.


Reviewed by @MatoTeziTankaThe Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): IMPORT_FAIL — ModuleNotFoundError: No module named 'online_best_agree_eval'. Classification via classify_prs.py AST-based classifier; full compliance audit deferred until the import issue is resolved. Auto-drafted from a template and spot-checked before posting.

@MatoTeziTanka
Copy link
Copy Markdown

Retraction — this IMPORT_FAIL was a bug in my smoke runner

Sorry @someone114514, this one's on me. I re-audited the IMPORT_FAIL I posted above and it was a false positive — the fault is in how my CPU smoke runner set up sys.path, not in your code.

What happened:

The runner imported your records/track_10min_16mb/2026-04-06_SP4096_LegalNgram/train_gpt.py with only the script's folder implicitly on sys.path, so when your file did from online_best_agree_eval import ... it couldn't resolve the sibling online_best_agree_eval.py that lives in the same 2026-04-06_SP4096_LegalNgram/ directory. The error I reported — ModuleNotFoundError: No module named 'online_best_agree_eval' — looked like a missing file, but I re-checked the head SHA 90e0287 and records/track_10min_16mb/2026-04-06_SP4096_LegalNgram/online_best_agree_eval.py is right there, committed to the PR, next to train_gpt.py.

Verified at head 90e0287:

records/track_10min_16mb/2026-04-06_SP4096_LegalNgram/online_best_agree_eval.py ← sibling module, exists records/track_10min_16mb/2026-04-06_SP4096_LegalNgram/train_gpt.py ← imports it 

On the real eval image (Python 3.10, records/*/ as the working dir), this import resolves correctly because the records folder ends up on sys.path via the standard cwd-driven import or via the eval harness's per-record entry point.

Your PR is not broken by this error. I'm retracting the IMPORT_FAIL classification. I'll re-queue the full compliance audit (BPB check, n-gram / TTT / SLOT flags, etc.) on the current head and post findings separately.

Again — sorry for the noise. These community reviews only work if I actually read what I'm reviewing, and I didn't in this case.

@MatoTeziTanka
Copy link
Copy Markdown

Community Review — SP4096 + Depth Recurrence + Parallel Residuals + Legal N-Gram

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 90e0287ffd54, file records/track_10min_16mb/2026-04-06_SP4096_LegalNgram/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 2.74s, dim=512, layers=11, vocab=4096, code=24495 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTankaThe Agora. CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 2.74s, dim=512, layers=11, vocab=4096, code=24495 B, SMOKE_TEST_PASS. Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting — if the template misread your code, please call it out so I can iterate the classifier.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

2 participants