The Week Before the Deal Closes
Every distribution deal has a phase nobody puts in the slide deck. The handshake is done. The lawyers are happy. The marketing plan is ready. And then somebody from standards and practices walks into a room and says, quietly, "we need to clear the catalog."
Clearing the catalog means walking every hour of content — every episode, every feature, every bonus clip — against the policy surface of the target network, the target platform, the target territory, and the target rating class. Language passes. Nudity. Violence. Drug use. Ethnic and religious sensitivities. Product placement disclosures. Brand safety thresholds. Flashing-light warnings for photosensitive epilepsy. Ad breaks that land cleanly on beats the FCC won't flag. Closed captions that meet the CVAA. Music cue sheets that line up with the PRO reporting for the territory. Loudness that conforms to the CALM Act in the US and R128 almost everywhere else.
A compliance team will tell you, candidly, that this is the step that kills more deals than any negotiation. Not because the content is bad — most of it clears, most of the time — but because the work is so slow and so manual that by the time the review is done, the launch window has moved and the business case has softened. Six weeks of three-reviewer rotation turns into ten weeks because somebody was out sick. Ten weeks turns into fourteen because a second territory got added halfway through. Fourteen weeks is when the deal renegotiates.
We have spent the last year building the opposite of that workflow. Not a replacement for the compliance reviewer — they are still the ones holding the pen — but a system that walks the catalog in parallel, flags everything that might be a problem, writes down why, hands the reviewer a list of candidates instead of a library, and drops markers directly onto the editor's timeline with annotations that say here is the shot, here is the rule it fails, here is the suggested fix.
This piece walks through how the workflow is built, what it does, how it works under the hood, and — most importantly — what it unlocks once you have it.
The Policy Surface Is Bigger Than Anyone Wants to Admit
Before describing the workflow, it's worth describing the problem it has to solve. The phrase "content compliance" gets thrown around as if it means one thing. It does not. A single episode of a single show, headed to a single international launch, has to clear against a list of policies that looks something like this:
- Rating body standards. MPA (formerly MPAA) for theatrical features, TV Parental Guidelines for broadcast in the US, BBFC for the UK, FSK for Germany, the ARCOM (formerly CSA) framework for France, CBFC for India, Eirin for Japan, the KFCB classification for Kenya, CBSC for Canada, ACMA and the Classification Board for Australia, and dozens more.
- Broadcast regulator standards. FCC indecency and profanity rules for US broadcast, Ofcom's watershed and harm-and-offence rules for the UK, the CALM Act for US loudness, EBU R128 for European loudness, and the territory equivalents of all of the above.
- Platform-specific standards. Every major streamer, ad-supported video platform, and FAST service maintains its own content policy document, updated quarterly, with rules that overlap with regulators but rarely match them. YouTube's advertiser-friendly guidelines are not the same as FCC indecency rules, which are not the same as the MPA rating rubric, which are not the same as an individual streamer's family-viewing tier thresholds.
- Brand safety frameworks. GARM (the Global Alliance for Responsible Media) publishes a brand suitability framework used across ad-supported platforms. IAB publishes a parallel set. Individual advertisers publish their own overlays on top of both.
- Accessibility standards. FCC CVAA rules for closed captions, audio description requirements for broadcast, WCAG-adjacent rules for on-demand platforms, and the corresponding regulations in other territories.
- Rights and music compliance. PRO cue sheets (ASCAP, BMI, PRS, GEMA, SACEM, JASRAC, and parallels), needle-drop reporting for library music, sync clearance tracking, festival rights windows, and the closely related problem of making sure the track in the final cut is the track in the cue sheet.
- Industry-specific codes. BCAP and CAP for UK advertising, NAD review standards in the US, AAAA industry codes, ESRB and PEGI for anything game-adjacent, and the Common Sense Media rubric when a family-viewing score is part of the platform deal.
- Photosensitive epilepsy thresholds. The Harding test and its regional equivalents, flash-rate and red-saturation limits, required warnings on broadcast and online.
- Territory-specific sensitivities. Religious symbols in some markets, political content in others, LGBTQ content in others still, language about specific conflicts, depictions of currency, depictions of firearms — the list is long and it changes.
No single reviewer knows all of this. A mid-sized compliance team might have three specialists across five of these categories, and for the rest they fall back on checklists, documents, and the institutional memory of whoever has been there the longest. Meanwhile the content keeps moving, the catalogs keep growing, and the deals keep closing on timelines that assume the clearance work will somehow get done.
The Shape of the Workflow
The Ceivo compliance workflow is designed around a simple premise: the reviewer should never be the system's first line of detection. They are the system's last line — the human who makes the final call — but by the time they are looking at a shot, the machinery should have already read the transcript, analyzed the frames, cross-referenced the rating body rubric, checked the territory overlays, and written a one-sentence explanation of why the shot might be a problem.
The workflow has five stages, and every stage runs against a policy catalog that lives outside the code — in markdown, editable by the compliance team, versioned like any other editorial artifact.
Stage one — policy loading. At the start of a run, the workflow loads a target policy profile: "US broadcast, TV-14, FCC daytime, advertiser-friendly, CVAA captions required, brand-safety overlay A." Each element of the profile points at a policy document — the TV Parental Guidelines rubric, the FCC indecency guidance, the CVAA requirements, the advertiser overlay. The documents are plain-text descriptions of what is allowed and what is not, written by the compliance team in language a language model can read. This is the system's rule book for the run.
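A profile like the one above can be sketched as plain data: each element of the profile names a policy document the run will load. The class, file paths, and helper below are illustrative, not Ceivo's actual schema.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class PolicyProfile:
    """A target profile: each element points at a plain-text policy document."""
    name: str
    documents: dict[str, str]  # element name -> path to markdown rule text

# Hypothetical profile matching the example in the text.
US_TV14_DAYTIME = PolicyProfile(
    name="us-broadcast-tv14-daytime",
    documents={
        "rating": "policies/tv-parental-guidelines.md",
        "indecency": "policies/fcc-indecency-guidance.md",
        "captions": "policies/cvaa-captions.md",
        "brand_safety": "policies/advertiser-overlay-a.md",
    },
)

def load_rule_texts(profile: PolicyProfile, root: Path) -> dict[str, str]:
    """Read every policy document in the profile into memory for the run."""
    return {key: (root / path).read_text() for key, path in profile.documents.items()}
```

Because the documents are plain files under version control, swapping a profile element is an edit and a commit, not a deployment.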
Stage two — parallel analysis. For every asset in the catalog (or every asset in a working set), the workflow fires a set of analysis calls against the Ceivo MCP in parallel. Transcript analysis for language and context. Visual description analysis for nudity, violence, drug use, and weapon depiction. Audio analysis for loudness and profanity detection that the transcript might have missed. Scene-level marker analysis for flashing lights and rapid cuts. Metadata analysis for rating context, genre, and territory flags. Each call returns a set of timecoded observations, not verdicts.
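The fan-out in this stage can be sketched with stub analyzers standing in for the real MCP calls. Every name below is hypothetical; the point is the shape: each analyzer returns timecoded observations, not verdicts, and they run in parallel per asset.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Observation:
    """A timecoded observation from one analyzer: evidence, not a verdict."""
    asset_id: str
    analyzer: str   # e.g. "transcript", "visual", "audio"
    tc_in: str      # SMPTE-style timecode strings
    tc_out: str
    detail: str     # what was observed, in plain language

# Stand-ins for the real MCP analysis calls.
def analyze_transcript(asset_id):
    return [Observation(asset_id, "transcript", "00:14:22:00", "00:14:23:00",
                        "strong profanity, speaker: character A")]

def analyze_visual(asset_id):
    return [Observation(asset_id, "visual", "00:21:05:00", "00:21:09:00",
                        "handgun visible in frame")]

ANALYZERS = [analyze_transcript, analyze_visual]

def analyze_asset(asset_id: str) -> list[Observation]:
    """Fan the analyzers out in parallel and flatten their observations."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda fn: fn(asset_id), ANALYZERS)
    return [obs for batch in results for obs in batch]
```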
Stage three — policy matching. The observations from stage two get matched against the policies loaded in stage one. This is where most naive compliance tools fall down — they match keywords to rules and hand the reviewer a flat list of flags. The Ceivo workflow runs the match through an LLM that knows the full context: the asset's rating, the target profile, the scene around the flag, the transcript excerpt, the visual description, and the specific rule text from the policy document. The LLM is not making the final call. It is writing down whether the observation plausibly triggers the rule, and if so, at what severity.
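One way to picture the matching step is the context bundle assembled for each observation before the LLM sees it. The fields and wording here are a sketch, not the production prompt; the structure is what matters: the model gets the asset, the profile, the observation, and the actual rule text, and is asked for a plausibility call with a reason.

```python
def build_match_context(observation: dict, rule_text: str, asset_meta: dict) -> str:
    """Assemble the full context the matching LLM sees for one observation.
    The model answers whether the observation plausibly triggers the rule,
    and at what severity -- it does not make the final call."""
    return "\n\n".join([
        f"ASSET: {asset_meta['title']} (rating {asset_meta['rating']})",
        f"TARGET PROFILE: {asset_meta['profile']}",
        (f"OBSERVATION ({observation['analyzer']}, "
         f"{observation['tc_in']}-{observation['tc_out']}): {observation['detail']}"),
        f"RULE TEXT:\n{rule_text}",
        ("QUESTION: Does the observation plausibly trigger the rule? "
         "Answer with a severity (advisory, consider, or must_fix) and a "
         "one-sentence editorial reason."),
    ])
```

Keyword matchers fail here precisely because they see only the first half of this bundle; the rule text and the surrounding scene are what make the match defensible.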
Stage four — reconciliation. Multiple rules can flag the same shot for different reasons, and sometimes they contradict each other. A scene that passes a US broadcast profile might fail a UK watershed profile. A shot that clears the MPA rubric might fail GARM's brand-safety overlay. The workflow reconciles the flags into a per-shot verdict that carries every rule that touched the shot, the severity of each, and the editorial reason each was triggered. This is the payload the reviewer will actually see.
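The reconciliation fold can be sketched in a few lines: group every flag by shot, keep all of them, and let the worst severity headline the verdict. Names and severity labels are illustrative.

```python
from dataclasses import dataclass

SEVERITY_ORDER = {"advisory": 0, "consider": 1, "must_fix": 2}

@dataclass
class Flag:
    shot_id: str
    rule: str        # e.g. "FCC indecency / daytime profanity"
    severity: str    # "advisory" | "consider" | "must_fix"
    reason: str      # editorial reason written at the matching stage

@dataclass
class Verdict:
    shot_id: str
    severity: str    # worst severity across all flags on the shot
    flags: list      # every rule that touched the shot survives

def reconcile(flags: list) -> list:
    """Fold every flag that touched a shot into one per-shot verdict."""
    by_shot: dict = {}
    for f in flags:
        by_shot.setdefault(f.shot_id, []).append(f)
    verdicts = []
    for shot_id, shot_flags in by_shot.items():
        worst = max(shot_flags, key=lambda f: SEVERITY_ORDER[f.severity])
        verdicts.append(Verdict(shot_id, worst.severity, shot_flags))
    return verdicts
```

Note that nothing is discarded: a shot that fails one profile and clears another carries both flags into the payload the reviewer sees.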
Stage five — output. The reconciled verdicts get written into the asset's metadata as markers, with in/out points, severity, rule references, and suggested remediation. The markers are the thing the editor interacts with — in their timeline, in their tool, at the moment they need them.
None of these stages are magic. Every one of them is a set of tool calls against an MCP surface. The workflow that stitches them together is a Ceivo skill — a markdown file the compliance team can read, modify, and fork without touching any code. That matters enormously once you have to maintain it, because policy changes weekly, and the change is almost never code. It is rules.
The Adobe Panel: Markers in the Editor's Hands
The place the compliance workflow actually pays off is not in a report. It is in the editor's timeline.
We built a Ceivo panel for Adobe Premiere Pro (and the pattern extends to the rest of the Creative Cloud suite via the Adobe Creative Cloud integration we shipped in February) whose job is to surface the compliance markers directly in the interface the editor is already using. When an editor opens a clip that has been processed through the compliance workflow, the panel does three things.
It places markers on the timeline. Every flagged shot gets a marker at the in-point, color-coded by severity: green for advisory, yellow for consider trimming, red for must remediate to pass the target profile. The editor can jump to the next marker with a keyboard shortcut and scrub the flagged beat directly.
It annotates each marker with the rule and the reason. Click a marker and the panel shows the rule that was triggered ("FCC profanity — 'f-word' at 00:14:22, broadcast daytime"), the evidence that triggered it ("transcript excerpt, speaker: character A"), and the suggested remediation ("bleep from 00:14:21.18 to 00:14:22.04, or replace with alt-take B from the dailies bin"). The remediation suggestion is generated by the LLM using the full context — the policy rule, the scene, and any alternate takes the workflow found in adjacent assets.
It writes an edit list the editor can act on. Every marker in the sequence contributes to a running edit list the panel maintains: trim here, bleep there, remove this cutaway, replace this shot with that alt take, add a warning card at the cold open, adjust loudness in reel three. The editor can accept an item and the panel makes the cut. They can reject it and the panel logs the rejection with a reason back to Ceivo, so the compliance team has an audit trail. They can flag an item as ambiguous and the panel routes it back to a reviewer without blocking the rest of the edit.
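The panel's data model can be sketched as a severity-to-color map plus an edit-list item that records the editor's decision for the audit trail. All names here are invented for illustration, not the panel's actual schema.

```python
from dataclasses import dataclass

# Marker color by severity, as described above.
SEVERITY_COLOR = {"advisory": "green", "consider": "yellow", "must_fix": "red"}

@dataclass
class EditListItem:
    """One entry in the panel's running edit list."""
    marker_id: str
    action: str              # "trim" | "bleep" | "replace" | "warning_card" | ...
    status: str = "pending"  # pending -> accepted | rejected | ambiguous
    note: str = ""

def resolve(item: EditListItem, decision: str, note: str = "") -> EditListItem:
    """Record the editor's call; the note becomes the audit-trail reason
    logged back to Ceivo."""
    if decision not in {"accepted", "rejected", "ambiguous"}:
        raise ValueError(f"unknown decision: {decision}")
    item.status = decision
    item.note = note
    return item
```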
This is the step that turns compliance from a back-office checklist into a creative tool. An editor who used to get a PDF full of timecodes and go scrub them manually now gets a timeline full of markers and an annotated edit list. The same job that took a week takes an afternoon, and the creative decisions — do I trim or do I bleep? — stay with the editor where they belong.
How It Works Under the Hood
The engineering pattern that makes this workflow possible is the same one we have been writing about across our last several pieces: capabilities and procedures live in different layers, and session state holds the intermediate results so nothing gets lost between calls.
The capabilities — transcript search, visual description analysis, loudness measurement, scene detection, marker writing, playlist assembly, policy document loading — all live in MCP servers. They are typed, versioned, observable, and callable from any MCP-aware agent runtime. The compliance team does not own them. Engineering owns them, and treats them like any other service.
The procedures — the compliance skill itself, the policy catalog, the territory overlays, the rule text — all live in markdown files. The compliance team owns them. When Ofcom updates its harm-and-offence guidance, a compliance lead edits the UK watershed policy document and the next workflow run picks up the change. No deployment, no ticket, no waiting on engineering. When a new streamer publishes its content policy, the team writes a new profile document, tests it against a sample, and adds it to the catalog.
Session state holds everything a single workflow run produces — every observation, every policy match, every rule trigger, every editorial reason — so the LLM reconciling the verdicts at stage four has the full history to reason over, and the reviewer looking at the output has a complete audit trail. Long-running runs across thousands of assets are not a problem, because no single LLM call ever has to hold the whole catalog in its context. The session state manager keeps the working memory on disk and hydrates the pieces the agent needs, when it needs them.
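A minimal sketch of that pattern, assuming a simple JSON-on-disk store (the real session state manager is more involved): each stage writes its results under a key, and later stages hydrate only the keys they need instead of carrying the whole run in context.

```python
import json
from pathlib import Path

class SessionState:
    """Per-run working memory on disk. Stages put results under a key;
    later stages hydrate only the pieces they need, when they need them."""

    def __init__(self, root: Path):
        self.root = root
        root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, value) -> None:
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def get(self, key: str):
        return json.loads((self.root / f"{key}.json").read_text())

    def keys(self) -> list:
        return sorted(p.stem for p in self.root.glob("*.json"))
```

The store doubles as the audit trail: every observation and verdict a run produced is still on disk, keyed, after the run ends.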
And because every tool call is logged and every LLM verdict carries its evidence, the whole thing is auditable. When a regulator or a broadcaster asks "why did you clear this shot?" the answer is not "the model said so." It is a chain of tool calls, a policy document version, a rule reference, and a reasoned verdict written in plain language. That is what compliance tooling is supposed to look like, and it has never been achievable at this speed before.
What This Unlocks: The Interesting Part
A workflow that clears a catalog faster is useful. A workflow that clears a catalog faster and produces a structured, reasoned, auditable record along the way is something else entirely — because that record becomes the input to a whole set of downstream workflows that used to be impossible.
Here are three that we are either building or actively exploring with customers.
Generative legal summaries
A compliance run produces a lot of output: hundreds or thousands of flags across a catalog, each with a rule reference, an editorial reason, and a severity. That's useful to an editor. It is not useful to a lawyer.
A lawyer needs a summary. "We ran the catalog against the target profile. Of 420 episodes, 402 cleared without modification. 16 require minor edits to pass. 2 are unlikely to pass regardless of edit. The primary risk themes are depictions of firearm use in a daytime slot, and three instances of trademarked brand visibility that conflict with the target's exclusivity clause." A language model can generate that summary directly from the structured workflow output, in the voice and format the legal team uses internally. It can do it per territory, per profile, per title, per episode, at whatever granularity the lawyer wants, and it can regenerate the summary the moment the catalog or the profile changes.
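Because the workflow output is structured, the numbers the summary cites are a straight aggregation, not an estimate. A sketch, assuming a hypothetical per-episode status field:

```python
from collections import Counter

def rollup(episode_verdicts: list) -> dict:
    """Aggregate per-episode verdicts into the counts a legal summary cites.
    Each count stays traceable to the verdicts behind it."""
    counts = Counter(v["status"] for v in episode_verdicts)
    return {
        "total": len(episode_verdicts),
        "cleared": counts["cleared"],
        "minor_edits": counts["minor_edits"],
        "unlikely_to_pass": counts["unlikely_to_pass"],
    }
```

The LLM's job is to narrate these numbers in the legal team's house voice, not to compute them; the arithmetic never passes through the model.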
The reason this is interesting is not that it writes the summary. It is that the summary is grounded. Every sentence points back to a specific verdict, which points back to specific evidence, which points back to specific timecodes in specific assets. The legal team does not have to trust the summary. They can click through it, all the way down to the frame, and the chain of custody is intact.
Scoring the effort to enter a new deal
Here is the scenario every business development team has lived through at least once. A call comes in: "what would it take to sell this channel on this new network?" The answer requires clearing the channel's catalog against the target network's policy profile, estimating the remediation effort, and producing a credible number that the deal team can bring to the table.
Historically this is a two-week research project, usually handed to someone who isn't really the right person for it, and the answer is always hedged because nobody has run the numbers end-to-end. The deal team guesses. They quote high to protect themselves. The target network pushes back. The deal stalls.
With a compliance workflow that already knows the catalog and can load a new profile on demand, the same question becomes a single run. "Load the target network's policy profile. Run the catalog against it. Reconcile. Score the effort." The output is not a PDF. It is a structured estimate: per episode, expected remediation minutes, expected reviewer time, expected editor time, categorized by the kind of work (trims, bleeps, replacements, warning cards, loudness passes). Roll it up to a per-title or per-catalog estimate and the deal team has a grounded answer before the coffee is cold.
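The scoring itself is a simple rollup once the fix counts exist. The categories and minute weights below are invented for illustration; in practice they would be tuned from the team's historical remediation data.

```python
# Hypothetical per-category effort weights, in minutes of editor time.
MINUTES_PER_FIX = {"trim": 20, "bleep": 5, "replace": 45,
                   "warning_card": 10, "loudness_pass": 30}

def effort_minutes(episode_fixes: dict) -> int:
    """Score one episode's remediation effort from its fix counts."""
    return sum(MINUTES_PER_FIX[kind] * n for kind, n in episode_fixes.items())

def catalog_estimate(catalog: dict) -> dict:
    """Roll per-episode estimates up to the number the deal team
    takes into the room: episode id -> estimated remediation minutes."""
    return {ep: effort_minutes(fixes) for ep, fixes in catalog.items()}
```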
The same pattern works for territory expansion. "What does it take to launch this catalog in Germany?" becomes a run against the FSK profile, the German broadcasting regulations, the territory-specific sensitivities, and the local rights catalog. The answer comes back with an effort score, a per-title breakdown, and a list of titles that are unlikely to clear regardless of remediation. The deal team can make the call with data, not gut.
The same pattern works for platform migration. "What does it take to move our ad-supported tier to a new brand-safety overlay?" becomes a run against the GARM framework and the advertiser's custom overlay. Out comes the effort score. The CFO gets a number. The sales team gets a plan.
Continuous compliance for new distribution
The interesting generalization of all of this is that compliance does not actually happen once per deal. It happens every time anything changes. A new regulation lands. A new advertiser overlay gets published. A new title enters the catalog. A new territory opens. Every one of these events changes the clearance picture.
A compliance workflow that runs on demand, against a versioned policy catalog, with a structured audit trail and an Adobe panel waiting to surface the output in the editor's hands, is a compliance workflow that can run continuously. Every new title gets cleared against every profile it might ever face, before it ever gets into a deal conversation. Every policy update triggers a re-run of the affected profiles. Every deal inquiry gets an answer in hours instead of weeks. The clearance work stops being the bottleneck and starts being a standing state of the catalog.
Honest Limits
This workflow is not a replacement for compliance expertise. It will not be, and it is not supposed to be.
It will flag the wrong things. It will miss things that are obvious to a human reviewer but depend on context the model does not have. It will over-flag, especially early in a new profile's life, until the compliance team has tuned the prompts and the policy documents to the target. It will run into edge cases — sarcasm, cultural references, regional slang, dog whistles — that require human judgment and will always require human judgment.
It is also not a legal product. Nothing the workflow produces is legal advice. The summaries are drafts for a lawyer to read, not rulings for a lawyer to file. The verdicts are candidates for a reviewer to confirm, not clearances for a distributor to act on. The Adobe panel is a tool for an editor to use, not a stamp of approval on the finished cut. The human is in the loop on every consequential decision, and the workflow exists to make that human's time worth more — not to remove them.
And it is not a silver bullet for catalogs that are under-indexed. A workflow that depends on transcripts, visual descriptions, loudness measurements, and scene markers is a workflow whose ceiling is set by the quality of those inputs. Ceivo can generate all of them on ingest, but they take time to build on a legacy library, and the compliance workflow runs better on a well-indexed catalog than a sparse one. The good news is that every hour of indexing work pays off across every future compliance run — it is a one-time cost against an infinite-horizon benefit.
The Pattern, One More Time
The reason this workflow works is not that it is clever. It is that it draws the same architectural lines we keep drawing.
Capabilities and procedures live in different layers. The MCP servers do the work. The skills and policy documents describe the work. Engineering owns the first. Compliance owns the second. Neither one is waiting on the other to ship.
Session state holds working memory. No LLM call has to hold the whole catalog, the whole policy surface, and the whole audit trail in its context. The session state manager keeps the working memory on disk and hydrates the pieces the agent needs when it needs them. Long-running workflows stop collapsing under their own weight.
LLMs reconcile, they do not rule. The workflow uses language models to compare observations, match them against rules, and draft editorial reasons. The final call is always a human's. The LLM's job is to make that call faster by doing the tedious assembly of evidence first.
Outputs land where the work happens. The markers do not go into a report. They go into the editor's timeline, through the Adobe panel, with annotations and a live edit list. The compliance work does not interrupt the edit — it becomes part of it.
That pattern, applied to compliance, turns a six-week process into a two-day one. Applied to the business questions on top of compliance — legal summaries, deal scoring, territory expansion, platform migration — it turns questions that used to take a quarter to answer into questions you can answer in a meeting.
What's Next
If you are staring down a catalog clearance project, a new distribution deal, a territory expansion, or just a standing compliance backlog that never seems to get shorter, we would like to show you what this workflow looks like against your content. We will set up a target profile, run a sample of the catalog through the pipeline, and walk you through the markers, the annotations, the edit list, and the reasoned verdicts — with the Adobe panel open so you can see how it lands for the editor.
Reach out and let's clear a reel together.