RYS II repeats layers, ships Qwen3.5 variants
OPEN_SOURCE ↗
REDDIT · 19d ago · RESEARCH PAPER

David Noel Ng's RYS II argues that Qwen3.5-27B develops a language-agnostic mid-stack "thinking space," where semantically matched English and Chinese prompts stay closer in activation space than same-language, different-content pairs. It also ships four FP8 RYS checkpoints on Hugging Face, spanning from a near-zero-overhead S variant to a heavier XL version.
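The repeated-layer trick behind the checkpoints can be sketched in a few lines. This is an illustrative reconstruction, not the author's code: the function name and the plain-list representation of the layer stack are assumptions. The key design point is that repeated blocks re-reference the same modules (pointer-based sharing), so parameter count and download size are unchanged; only inference compute grows.

```python
def repeat_midstack(layers, start, end, times):
    """Return a layer sequence where layers[start:end] run `times` times.

    `layers` stands in for a decoder's block list (e.g. an nn.ModuleList
    viewed as a Python list). Repeats are the *same* objects, not copies,
    so weights are shared and no new parameters are introduced.
    """
    assert 0 <= start < end <= len(layers) and times >= 1
    return layers[:start] + layers[start:end] * times + layers[end:]

# Toy usage on layer indices: repeat the contiguous mid-stack block 3..6 twice.
order = repeat_midstack(list(range(10)), 3, 6, 2)
# The mid-stack band appears twice; the format-specific edges run once.
```

A contiguous band like this is the simple case the post's efficiency numbers favor; fancier multi-block compositions are just multiple calls with different ranges, at extra overhead per junction.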

// ANALYSIS

This feels less like benchmark theater and more like a real circuit-level trick: the middle of the stack keeps acting like reusable computation, while the edges stay format-specific.

  • The centered similarity plots in the post are the strongest evidence for the "universal language" claim, because content beats language in the reasoning band. [blog](https://dnhkng.github.io/posts/rys-ii/)
  • Contiguous mid-stack blocks beat fancier multi-block compositions once overhead is counted, so the efficiency frontier rewards the simplest answer.
  • Single-layer repeats can move the needle, but the gains are smaller and skew toward math rather than improving evenly across probes.
  • The scores come from the author's Math120/EQ140 probe suites, so read the result as strong internal evidence rather than a public leaderboard takeover. [Hugging Face](https://huggingface.co/dnhkng/RYS-Qwen3.5-27B-FP8-S)
  • The natural next step is a LoRA tune around the loopback junctions or pointer-based layer sharing, which could make the method cheaper and stronger.
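The centered-similarity evidence in the first bullet can be sketched numerically. This is a toy model, not the post's analysis pipeline: the synthetic "hidden states" and the three-prompt mean are assumptions, chosen only to show why centering matters. Raw cosines between transformer hidden states are inflated by a shared background direction; subtracting the layer mean removes it, so content-matched cross-language pairs can separate from same-language, different-content pairs.

```python
import numpy as np

def centered_cosine(a, b, mean):
    """Cosine similarity after subtracting a shared mean activation,
    removing the background direction that inflates raw cosines."""
    a, b = a - mean, b - mean
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
content = rng.normal(size=64)              # shared "meaning" direction (toy)
en = content + 0.3 * rng.normal(size=64)   # English prompt, this content
zh = content + 0.3 * rng.normal(size=64)   # Chinese prompt, same content
other = rng.normal(size=64)                # same language, different content

mean = (en + zh + other) / 3
cross_lang = centered_cosine(en, zh, mean)     # matched content, different language
same_lang = centered_cosine(en, other, mean)   # same language, different content
# In the mid-stack "reasoning band", the post's claim is cross_lang > same_lang.
```

The toy reproduces the qualitative pattern the plots show: after centering, content wins over language. Whether that holds for a given real layer is exactly what the post's per-layer plots measure.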
// TAGS
rys · qwen · llm · research · benchmark · open-weights · open-source

DISCOVERED

19d ago

2026-03-23

PUBLISHED

19d ago

2026-03-23

RELEVANCE

9/10

AUTHOR

Reddactor