
records: David Ghazaryan — MoE + BigramHash4096 (val_bpb 1.11799) #1538

Open
davie2009kh wants to merge 1 commit into openai:main from davie2009kh:clean-submission

Conversation

@davie2009kh

Results

| Seed | val_bpb | Artifact (bytes) |
| --- | --- | --- |
| 1337 | 1.11764880 | 15,873,596 |
| 42 | 1.11891002 | 15,893,104 |
| 2025 | 1.11742168 | 15,908,116 |
| mean | 1.11799350 | 15,891,605 |

Novel Contributions

  1. BigramHash4096 — expanded from SOTA's 3072 to 4096 buckets
  2. MoE MLP — first Mixture-of-Experts exploration in this repo (4 experts, top-2 routing)

Hardware

8× H100 80GB HBM3 (YSU HPC Cluster)

@MatoTeziTanka

Community Review — records: David Ghazaryan — MoE + BigramHash4096 (val_bpb 1.11799)

BPB: 1.11799 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA b58c2332e7ff, file records/track_10min_16mb/2026-04-05_David_MoE-Bigram4096/train_gpt.py):

The TTT path at line 2205 implements the score-first-per-chunk pattern: each chunk is scored under torch.no_grad() / inference_mode() before the base_model.train() + SGD adaptation runs on that same chunk, with an is_last_chunk guard so the final chunk gets no adaptation pass. This is the structural shape the legal frontier uses (PRs #1416 erichroepke, #1423 aryanbhosale).

Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here — chunk ci is scored under weights adapted only on chunks 0..ci-1. No prequant_ttt_adapt_adamw(val_tokens, ...) multi-epoch fine-tune, no scored-region SLOT, no target-in-key n-gram cache.
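A minimal sketch of the score-first-per-chunk shape described above, to make the ordering concrete. The function name, chunking, optimizer, and tiny-model interface are illustrative stand-ins, not the PR's actual code.

```python
import torch
import torch.nn.functional as F

def ttt_eval(model, val_tokens: torch.Tensor, chunk_size: int = 2048, lr: float = 1e-4) -> float:
    """Score each chunk BEFORE adapting on it; the final chunk gets no adaptation pass.

    Chunk ci is therefore always scored under weights adapted only on chunks 0..ci-1,
    which is the legality condition: no token is scored after the model has seen it.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    chunks = val_tokens.split(chunk_size)
    total_loss, total_tok = 0.0, 0
    for ci, chunk in enumerate(chunks):
        is_last_chunk = ci == len(chunks) - 1
        # 1) Score this chunk with the current (not-yet-adapted-on-it) weights.
        model.eval()
        with torch.no_grad():
            logits = model(chunk[:-1].unsqueeze(0)).squeeze(0)   # next-token logits
            loss = F.cross_entropy(logits, chunk[1:])
        total_loss += loss.item() * (len(chunk) - 1)
        total_tok += len(chunk) - 1
        # 2) Only after scoring, take one SGD step on the same chunk (never on the last).
        if not is_last_chunk:
            model.train()
            opt.zero_grad()
            F.cross_entropy(model(chunk[:-1].unsqueeze(0)).squeeze(0), chunk[1:]).backward()
            opt.step()
    return total_loss / total_tok  # mean NLL over all scored tokens
```

The illegal variants named above invert this ordering: multi-epoch fine-tuning on the validation set adapts on a token before (re)scoring it, and a target-in-key cache leaks the answer into the lookup.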

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.11s, dim=512, layers=11, vocab=1024, code=111764 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.

