Core ML Runtime Flips Image Labels
OPEN_SOURCE
REDDIT // 22d ago · RESEARCH PAPER

Liang Wang shows that the same MobileNetV3 classifier can disagree with itself when run in PyTorch versus Core ML: small input perturbations flip the top-1 prediction on about 20% of tested ImageNet images. The post argues that runtime drift at deployment is a real reliability and safety issue, not just a numerical curiosity.

// ANALYSIS

This is the kind of paper that should make model teams uncomfortable in a useful way: benchmark wins in PyTorch do not guarantee the same behavior once the model hits an edge runtime. The nasty part is that the mismatch can show up without gradients, internals, or fancy attack tooling.

  • The risk looks infrastructure-level, not architecture-specific, because the drift comes from shared runtime behavior across common building blocks like Conv2D.
  • Core ML’s FP16 GPU path stands out as especially brittle in this write-up, which makes device/runtime selection part of the threat model.
  • Deployment validation should include cross-runtime regression tests, not just accuracy checks on the training framework.
  • The article’s 20% mismatch rate is preliminary, but it is high enough to justify treating runtime parity as a release gate.
  • For vision and edge-AI teams, this is a reminder that “same model” does not mean “same system.”
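The cross-runtime regression testing suggested above can be sketched in a few lines. This is an illustrative example, not the article's methodology: it assumes you have already collected top-1 labels for the same inputs from both runtimes (e.g. PyTorch on the training machine, Core ML on device), and the function and threshold names are hypothetical.

```python
# Illustrative cross-runtime parity gate (names and threshold are assumptions,
# not taken from the article). Given top-1 labels produced by the same model
# under two runtimes, fail the release if the mismatch rate is too high.

def top1_mismatch_rate(labels_a, labels_b):
    """Fraction of inputs where the two runtimes disagree on the top-1 label."""
    if len(labels_a) != len(labels_b):
        raise ValueError("prediction lists must cover the same inputs")
    disagreements = sum(a != b for a, b in zip(labels_a, labels_b))
    return disagreements / len(labels_a)

def parity_gate(labels_a, labels_b, max_mismatch=0.01):
    """Release gate: True only if the runtimes agree within tolerance."""
    return top1_mismatch_rate(labels_a, labels_b) <= max_mismatch

# Example: 1 disagreement out of 5 images is a 20% mismatch rate,
# comparable to the rate reported in the post, so the gate fails.
pytorch_top1 = ["cat", "dog", "car", "plane", "ship"]
coreml_top1  = ["cat", "dog", "car", "plane", "truck"]
```

In practice the label lists would come from running the exported model on the target device and runtime (CPU vs GPU, FP32 vs FP16), since the article singles out Core ML's FP16 GPU path as a distinct failure mode worth gating separately.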
// TAGS
core-ml · pytorch · inference · edge-ai · research · safety

DISCOVERED

2026-03-21 (22d ago)

PUBLISHED

2026-03-20 (22d ago)

RELEVANCE

8 / 10

AUTHOR

Tingxiaojue