OPEN_SOURCE
REDDIT // 4h ago · MODEL RELEASE
Tencent drops Hy3 preview weights
Tencent released Hy3 preview, a 295B-parameter MoE language model with 21B active parameters, 256K context, and weights on Hugging Face, ModelScope, and GitCode. The model targets reasoning, instruction following, coding, and agent workflows, with vLLM and SGLang deployment paths.
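A minimal sketch of what the vLLM deployment path could look like, assuming the preview weights are published under a Hugging Face repo ID such as "tencent/Hy3-preview" (a placeholder, not confirmed by the release) and that an 8-GPU tensor-parallel setup is available:

# Sketch only: loading the preview weights with vLLM's offline inference API.
# "tencent/Hy3-preview" is a placeholder repo ID; adjust to the actual release name.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/Hy3-preview",   # hypothetical Hugging Face repo ID
    tensor_parallel_size=8,        # assumes 8 large-memory GPUs, per the analysis below
    max_model_len=262144,          # 256K-token context window
    trust_remote_code=True,        # preview releases often ship custom model code
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Explain what a mixture-of-experts model is."], params)
print(outputs[0].outputs[0].text)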
// ANALYSIS
Hy3 preview is a serious open-weights entrant, but its practical audience is narrower than the headline suggests: big-GPU operators, benchmark watchers, and teams comfortable with Tencent's custom license.
- 21B active parameters makes the MoE design efficient on paper, but serving still calls for 8 large-memory GPUs, so this is not a casual local model
- Tencent is leaning into coding and agent evals, including SWE-bench Verified, Terminal-Bench 2.0, BrowseComp, and internal workflow benchmarks
- The 256K context window and OpenAI-compatible serving path make it more interesting for long-context agent stacks than simple chat replacement (see the client sketch after this list)
- Community reaction is mixed: users like another open-weights option, but early comments flag restrictive licensing and coding performance that may trail smaller Qwen-class models
- The release matters less as a single model drop than as evidence Tencent is rebuilding Hunyuan around product feedback loops and agent workloads
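A sketch of how an agent stack might call the model through the OpenAI-compatible endpoint that vLLM or SGLang exposes once the weights are served; the base URL, port, and model name here are assumptions for illustration, not values from the release:

# Sketch: calling a locally served Hy3 preview through an OpenAI-compatible API.
# base_url, api_key, and the model ID are placeholder values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="tencent/Hy3-preview",  # placeholder model ID
    messages=[
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Summarize the failing test output and propose a fix."},
    ],
    max_tokens=1024,
)
print(resp.choices[0].message.content)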
// TAGS
hy3-preview · llm · open-weights · reasoning · ai-coding · agent · inference
DISCOVERED
2026-04-23 (4h ago)
PUBLISHED
2026-04-23 (6h ago)
RELEVANCE
9/10
AUTHOR
Namra_7