PH · PRODUCT_HUNT // 2h ago · MODEL RELEASE

TwelveLabs launches Pegasus 1.5 for time-based metadata

Pegasus 1.5 is TwelveLabs’ new video model for converting raw footage into structured, timestamped metadata rather than just answering questions about clips. The release centers on a schema-first `/analyze` workflow where teams define what matters in their domain, then get back non-overlapping temporal segments with JSON outputs that can feed search, analytics, compliance, and automation pipelines. TwelveLabs positions it as a shift from clip-based QA to production-ready video data.
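
To make the workflow concrete, here is a minimal sketch of what a schema-first `/analyze` call could look like. Only the `/analyze` path comes from the announcement; the base URL, auth header, and every field name (`video_id`, `schema`, `segments`, `start`, `end`, `event`) are illustrative assumptions, not TwelveLabs' documented API.

```python
# Hypothetical schema-first /analyze call. The endpoint path is from the
# announcement; the host, auth scheme, and all field names are assumptions.
import requests

API_BASE = "https://api.example.com"  # placeholder host, not the real one

# The team declares what matters in its domain; the model fills it in
# per segment. Field names here are invented for illustration.
schema = {
    "event": "string",         # what happened in the segment
    "people": "string[]",      # who was involved
    "is_highlight": "boolean"  # worth surfacing in a reel?
}

resp = requests.post(
    f"{API_BASE}/analyze",
    headers={"Authorization": "Bearer <API_KEY>"},
    json={"video_id": "vid_123", "schema": schema},
    timeout=120,
)
resp.raise_for_status()

# Assumed response shape: non-overlapping segments, each carrying
# timestamps plus the caller-defined fields.
for seg in resp.json()["segments"]:
    print(f'{seg["start"]:7.1f}s - {seg["end"]:7.1f}s  {seg["event"]}')
```

The point of that shape is that each segment is directly insertable into a search index or warehouse table, which is what "production-ready video data" means in practice.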

// ANALYSIS

Hot take: this is a real product shift, not a cosmetic model bump. TwelveLabs is trying to make video behave more like a database table than a blob of media.

  • The core change is boundary detection plus structured extraction across an entire video, which is much more operationally useful than clip-level Q&A.
  • The schema-first design is the right move for enterprise workflows where “what matters” differs by domain, whether that is sports, media, or brand monitoring.
  • The model is clearly aimed at downstream systems, not demos: the emphasis on valid JSON, timestamps, and non-overlapping segments matters more than fluent prose (see the sketch after this list).
  • The main moat is not just model quality, but the evaluation and training stack built around temporal metrics and verifiable rewards.
  • If Pegasus 1.5 holds up in practice, it could reduce a lot of custom preprocessing and annotation glue code for teams working with large video libraries.
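
If pipelines are going to trust those guarantees, it is cheap to verify them at ingestion. A minimal sketch, assuming segments arrive as start/end seconds as in the request example above; the temporal IoU helper illustrates the kind of boundary metric an evaluation stack like the one described might use, though TwelveLabs' actual metrics are not public.

```python
# Ingestion-side checks for model output. The segment format
# (start/end in seconds) is an assumption about the response shape.

def assert_non_overlapping(segments: list[dict]) -> None:
    """Raise if any two segments overlap. Touching endpoints (one segment
    ending exactly where the next begins) count as non-overlapping."""
    ordered = sorted(segments, key=lambda s: s["start"])
    for prev, cur in zip(ordered, ordered[1:]):
        if cur["start"] < prev["end"]:
            raise ValueError(f"overlapping segments: {prev} vs {cur}")

def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    """Intersection-over-union of two time intervals, a standard way to
    score how well a predicted boundary matches a reference one."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

segments = [
    {"start": 0.0, "end": 12.5, "event": "kickoff"},
    {"start": 12.5, "end": 30.0, "event": "first possession"},
]
assert_non_overlapping(segments)                # passes: boundaries touch
print(temporal_iou((0.0, 12.5), (1.0, 13.0)))   # ~0.885
```

Sorting by start time keeps the overlap check order-independent, so it holds regardless of how the model orders its output.
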
// TAGS
video · multimodal · metadata · foundation model · video-ai · enterprise-ai · temporal-segmentation

DISCOVERED: 2h ago (2026-04-20)
PUBLISHED: 7h ago (2026-04-20)
RELEVANCE: 9/10
AUTHOR: [REDACTED]