Opinion: AI is already part of post.
Now it needs guardrails
Amazon MGM Studios is reportedly building an internal "AI Studio" to speed up TV and film production and rein in rising costs. A closed beta is planned for March, with results expected by May. The stated intent is that the tools will be controlled by humans and used to streamline the creative process, rather than replace humans.
Whether Amazon ultimately earns trust is a separate question. But the move forces a conversation the film and television industry has largely avoided having in public: AI is already being used in post-production around the world, often operating under a tacit, don't-ask-don't-tell model. The creative benefits of using AI tools in post workflows to temporarily represent ideas – aiding decision-making and idea testing – can be significant. However, in today's climate, admitting to the use of AI in a film or series translates to many as "we replaced people". Until guardrails and rules are clear, creators worry that transparency leads to controversy first and conversation after. That dynamic has been hard to ignore over the past couple of awards seasons, with scrutiny around AI use on films like The Brutalist and Emilia Pérez.
So the industry currently runs on an honour system: teams use AI to temp ideas, with the expectation that any AI output is removed before final delivery. There's often nothing in place to check or control that, which only feeds a sense of crisis around AI and its implementation. That's why Amazon's line that "humans are in control" matters – not as reassurance, but as a practical question. What does it actually mean to make AI flow through humans rather than around them?
AI is coming either way. The question is whether it comes as a shortcut that bypasses people, or as a governed set of tools embedded inside workflows where people remain responsible for authorship, taste, consent, and final decisions. If we establish that pathway early – AI running through humans – then as the tools develop, they're more likely to keep developing along that same route.
A clean way to handle the AI that's already creeping into post-production is to treat it as a temporary ideation tool – not a finishing tool. Keep the existing teams and workflows in place and add a closed set of AI capabilities that enable directors, editors and broader post-production teams to try ideas quickly and judge them in context. The point isn't to automate post or bypass processes; it's to remove friction from testing so the work we already do gets sharper and ideas can be better explored before moving down traditional pipelines.
In practice, this is already happening most clearly in voice. AI-assisted rough-ins around Automated Dialogue Replacement (ADR) are increasingly commonplace, but rarely discussed openly. ADR has been standard in post for decades, allowing directors and editors to add or tweak lines to tell the story better. Editorial will often rough something in so everyone can evaluate the change; later, the actor records the final line properly and it gets integrated into the final mix. The weak link is that rough stage. A scratch read can be wildly misleading. You can end up making decisions based on something that doesn't sound like the character, doesn't sit in the performance, and doesn't represent what the scene will actually feel like once it's properly recorded.
If AI is used to 'go around humans,' the temptation is obvious: generate something that sounds like the actor, drop it in, and chances are the audience will be none the wiser. That's where fear and anger make sense.
But if AI is used to 'flow through humans,' you can build a workflow that makes the temp stage more accurate without replacing the final stage at all. With explicit consent, an actor could provide project-specific voice material that is used only for that production, inside a segregated data environment, under clear terms. Editorial can temp lines that actually sound like the character and let the creative team judge the idea properly. Then, once the direction is clear, the work still goes through the traditional pipeline: ADR sessions, performance choices, sound editorial, the mix. The point isn't to remove actors or sound teams. The point is to remove guesswork.
That's a version of the basic promise of 'AI through humans': better placeholders, clearer creative decisions, and the same departments still doing the real work.
But that promise only holds if it's provable. At the moment, the industry is relying on everyone doing the right thing: that AI temps get stripped out before delivery, and that production inputs don't get quietly repurposed as training data. The problem is there's rarely anything in place that makes either of those assumptions verifiable. If AI is in the workflow, you need guardrails that are real rather than implied: consent in writing, a segregated "copyright-safe island", checks before delivery, and deletion at the end of the project with receipts.
Netflix has published partner guidelines around secured environments and restrictions on storing, reusing, or training on production materials, and treating generated material as temporary. Helpful – but compliance still depends on self-reporting, and that only gets you so far. If these rules are going to stick, they need to be something productions don't have to negotiate from scratch every time.
This is where studios and streamers become unavoidably central. People are right to be wary of them building and controlling AI tools – cost pressure is real, and the incentive to cut corners is obvious. But they're also pivotal here because they have the leverage to push AI vendors into tighter terms, especially around what's allowed to be training data. Further, individual productions can be well intentioned and make good choices, but they can't easily turn those choices into a norm. A studio can make the guardrails part of commissioning and delivery: consent clauses as standard, tools kept inside segregated environments rather than third-party databases (see the Lionsgate–Runway ML deal from 2024), traceability and delivery checks built into the pipeline, and deletion certificates at the end of a project.
The point of making those guardrails standard isn't just protection for creatives – it's also about getting better work on screen. If AI is going to be used, the upside is in reducing friction in the process so teams can test more, decide earlier, and avoid late-stage compromises driven by time and money. But that upside only exists if it's kept inside enforceable boundaries. This isn't a plea to "embrace" AI. It's just saying: it's coming in one way or another, so we should shape where it sits in the pipeline before it is shaped for us.
So is Amazon's move a positive step? It can be, if it shifts the industry from secrecy to structure: away from an honour system and toward workflows that are transparent, consent-based, segregated, traceable, and auditable. Studio-level infrastructure can make those controls normal, rather than something each production has to reinvent. And that matters, because if AI is going to be part of post-production, the first priority is making sure it flows through humans rather than around them. But that cuts both ways. The same infrastructure can just as easily harden shortcuts into standard practice. Whether this becomes progress or a problem comes down to what gets built into the system – and whether "humans in control" is reflected in the contracts and the workflow, not just the messaging.