Moments Lab Secures $24 Million to Redefine Video Discovery With Agentic AI

Moments Lab, the AI company redefining how organizations work with video, has raised $24 million in new funding, led by Oxx with participation from Orange Ventures, Kadmos, Supernova Invest, and Elaia Partners. The funding will accelerate the company’s U.S. expansion and support continued development of its agentic AI platform, a system designed to turn massive video archives into instantly searchable and monetizable assets.

At the heart of Moments Lab is MXT-2, a multimodal video-understanding AI that watches, hears, and interprets video with context-aware precision. It doesn’t just label content; it narrates it, identifying people, places, logos, and even cinematographic elements like shot types and pacing. This natural-language metadata turns hours of footage into structured, searchable intelligence, usable across creative, editorial, marketing, and monetization workflows.

But the real leap forward is the introduction of agentic AI: an autonomous system that can plan, reason, and adapt to a user’s intent. Instead of merely executing instructions, it understands prompts like “generate a highlight reel for social” and takes action: pulling scenes, suggesting titles, selecting formats, and aligning outputs with a brand’s voice or platform requirements.

“With MXT, we already index video faster than any human ever could,” said Philippe Petitpont, CEO and co-founder of Moments Lab. “But with agentic AI, we’re building the next layer: AI that acts as a teammate, doing everything from crafting rough cuts to uncovering storylines hidden deep in the archive.”

From Search to Storytelling: A Platform Built for Speed and Scale

Moments Lab is more than an indexing engine. It’s a full-stack platform that empowers media professionals to move at the speed of story. That starts with search, arguably the most painful part of working with video today.

Most production teams still rely on filenames, folders, and tribal knowledge to locate content. Moments Lab changes that with plain-text search that behaves like Google for your video library. Users can simply type what they’re looking for, such as “CEO talking about sustainability” or “crowd cheering at sunset,” and retrieve the exact clips within seconds.
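To make the idea concrete, here is a minimal, hypothetical sketch of what such a plain-language search could look like behind the scenes; the endpoint URL, parameters, and response fields are illustrative assumptions, not Moments Lab’s documented API.

```python
# Hypothetical sketch only: the endpoint, parameters, and response shape are
# assumptions for illustration, not Moments Lab's actual API.
import requests

SEARCH_URL = "https://api.example-video-platform.com/v1/search"  # placeholder URL


def search_clips(query: str, api_key: str, limit: int = 5) -> list[dict]:
    """Send a plain-language query and return matching clips with timecodes."""
    response = requests.get(
        SEARCH_URL,
        params={"q": query, "limit": limit},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]


# Example: retrieve clips matching an everyday-language description.
for clip in search_clips("CEO talking about sustainability", api_key="YOUR_KEY"):
    print(clip["start"], clip["end"], clip["description"])
```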

Key features include:

  • AI video intelligence: MXT-2 doesn’t just tag content; it describes it using time-coded natural language, capturing what is seen, heard, and implied.
  • Search anyone can use: Designed for accessibility, the platform lets non-technical users search across thousands of hours of footage using everyday language.
  • Instant clipping and export: Once a moment is found, it can be clipped, trimmed, and exported or shared in seconds, with no need for timecode handoffs or third-party tools.
  • Metadata-rich discovery: Filter by people, events, dates, locations, rights status, or any custom facet your workflow requires.
  • Quote and soundbite detection: Automatically transcribes audio and highlights the most impactful segments, ideal for interview footage and press conferences.
  • Content classification: Train the system to sort footage by theme, tone, or use case, from trailers to corporate reels to social clips.
  • Translation and multilingual support: Transcribes and translates speech, even in multilingual settings, making content globally usable.

This end-to-end functionality has made Moments Lab an indispensable partner for TV networks, sports rights holders, ad agencies, and global brands. Recent clients include Thomson Reuters, Amazon Ads, Sinclair, Hearst, and Banijay, all grappling with increasingly complex content libraries and rising demands for speed, personalization, and monetization.

Built for Integration, Trained for Precision

MXT-2 is trained on more than 1.5 billion data points, reducing hallucinations and delivering high-confidence outputs that teams can rely on. Unlike proprietary AI stacks that lock metadata in unreadable formats, Moments Lab keeps everything in open text, ensuring full compatibility with downstream tools like Adobe Premiere, Final Cut Pro, Brightcove, YouTube, and enterprise MAM/CMS platforms via API or no-code integrations.
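As an illustration of what open-text metadata can mean in practice, the sketch below shows time-coded, natural-language descriptions stored as plain JSON; the field names and values are hypothetical, not Moments Lab’s actual schema.

```python
# Hypothetical example of open, time-coded metadata kept as plain text (JSON).
# Field names and values are illustrative, not Moments Lab's actual schema.
import json

metadata = {
    "asset": "stadium_final_2024.mxf",
    "segments": [
        {
            "start": "00:03:12.040",
            "end": "00:03:18.500",
            "description": "Wide shot of the crowd cheering at sunset",
            "shot_type": "wide",
        },
        {
            "start": "00:47:51.000",
            "end": "00:48:05.120",
            "description": "CEO soundbite on sustainability goals",
            "shot_type": "medium close-up",
        },
    ],
}

# Because the metadata is plain text, any downstream NLE, MAM, or CMS that
# reads JSON can consume it without a proprietary decoder.
print(json.dumps(metadata, indent=2))
```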

“The real power of our system is not just speed, but adaptability,” said Fred Petitpont, co-founder and CTO. “Whether you’re a broadcaster clipping sports highlights or a brand licensing footage to partners, our AI works the way your team already does, just 100x faster.”

The platform is already being used to power everything from archive migration to live event clipping, editorial research, and content licensing. Users can share secure links with collaborators, sell footage to external buyers, and even train the system to align with niche editorial styles or compliance guidelines.

From Startup to Standard-Setter

Founded in 2016 by twin brothers Frederic and Philippe Petitpont, Moments Lab began with a simple question: What if you could Google your video library? Today, it’s answering that question, and more, with a platform that redefines how creative and editorial teams work with media. It has become the most awarded indexing AI in the video industry since 2023 and shows no signs of slowing down.

“When we first saw MXT in action, it felt like magic,” said Gökçe Ceylan, Principal at Oxx. “This is exactly the kind of product and team we look for: technically brilliant, customer-obsessed, and solving a real, growing need.”

With this new round of funding, Moments Lab is poised to lead agentic AI for video, a category that didn’t exist five years ago, and to define the future of content discovery.