Consumers have long wanted a single place to access all content. Netflix was probably the closest anyone ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training, if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use, or more likely, they will all put up a big wall and suck eggs.
>GenAI is not used to replace or generate new talent performances
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.
And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.
Eventually consumers will use the technology to replace studios.
Any studio that isn't playing ostrich has realized this (so possibly none of them) and should just be trying to extract as much value as possible, as quickly as possible, before everything goes belly up.
Of course timelines are still unclear. It could be 5 years or 20, but it is coming.
The issue wasn't whether they said that thing or not; companies say a lot of things which are fundamentally a lie, things to keep up appearances – and which are oftentimes not enforced. It's like companies claiming they believe in fair pay while using Chinese sweatshops or whatever.
In this case, for instance, Netflix still has a relationship with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature-length film indistinguishable from a traditional one. Also, they might be apprehensive about legal risks and copyrightability at this exact moment; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle as far as legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
The if-statement "If you want to do X, you need to get approval." probably does actually reflect what Netflix truly think, but it doesn't mean they believe X shouldn't be done. It means they believe X is risky and they want to be in control of whether X is done or not.
I don't see how you could read the article and come away with the impression that Netflix believe GenAI shouldn't be used to replace or generate new talent performances.
I’m inclined to agree. The goalposts will move once the time is right. I’ve already personally witnessed it happening; a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later and the marketing has shifted to cost optimization, efficiency, and better “uptime” over real employees.
The truth is that Netflix, Amazon, or any other company, honestly, would fire 99% of their workforce if it were possible, because they only care about profit – hell, they are companies, that's why they exist. At the same time, brands have to pretend they care about society, people having jobs, the climate, whatever, so they can't simply say: "Yeah, we exist to make money and we totally want to fire you guys as soon as possible." As you said, it's all masked as staff augmentation and other technical mumbo jumbo.
>GenAI is not used to replace or generate new talent performances
>> This is 100% a lie.
We’ve had CGI for decades and generally don't mind. But at the point where AI usage becomes a negative (e.g., the content appears low quality because of it), I'd expect some backlash and a pulling back in the industry.
In film and TV, customers have so much choice. If a film or show is low effort, it's likely going to get low ratings.
Every business and industry is obviously incentivized to cut costs, but if those cost cuts directly affect the reputation and imagery of your final product, you probably want to choose wisely which things you cut.
I think you're right, in general - certainly AI will replace background actors, though that's already been happening for years without AI generation. I'm also pretty sure that if/when AI can generate whole films, then that'll happen, too.
However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?
I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.
> I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
>
> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.
There’s no guarantee AI will get good enough to replace anyone. We’ve pretty much run out of training data at this point. I’m a little annoyed that people speak about future progress like it’s an inevitability.
I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.
Just look at people in their early 20s. They don't watch shows/movies; they only watch short-form videos. Short-form videos will mostly be created using GenAI tools as early as 2026.
Common-sense, practical, and covers a lot of the shifting ground around an artist’s ability to withdraw consent under GDPR and the ways they can properly use this to prevent their likenesses being used to train their digital replacements.
(Equity is the UK equivalent of the AEA and SAG-AFTRA combined)
I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or not. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired when striking, since the customer is always right.
It's interesting that they don't explicitly state the fact that AI-generated content cannot be copyrighted. They seem to dance around that. The provision against generating a major character is about respect for talent and so on, rather than the fact that that would make the major character public domain and therefore able to be used by anyone for anything.
I wonder if we're going to see a push back by media companies around copyright over AI-generated content. Though I don't see how; copyright is explicitly an artificial legal protection of human works.
Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they're recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.
This actually looks pretty good. The key takeaway I got was that they know their business is dependent upon Intellectual Property rights, and that Generative AI in final outputs or productive work undermines the foundation of their future success vis a vis discounting or dismissing IP Law and Rights.
That’s likely to be the middle ground going forward for the smarter creative companies, and I’m personally all for it. Sure, use it for a pitch, or a demo, or a test - but once there’s money on the line (copyright in particular), get that shit outta there because we can’t own something we stole from someone else.
Or they can do like Call of Duty, which just makes skins "heavily inspired" by franchises they don't own. The week Borderlands 4 came out, they put out a few cel-shaded skins that heavily resemble the look of that game's characters; there's one skin, "Vibrant Serpent", that is pretty much Reptile from Mortal Kombat; they got a bit of heat in May of this year for releasing a skin that looked too much like one from another game called High On Life; and the list goes on. It reminds me a lot of the disguises Spirit Halloween sells every October.
And yes, I know they do have legal, agreed partnerships, like with the Predator franchise, or the Beavis and Butt-Head franchise (yes, they exist in CoD now...), but those account for only a tiny number of the premium skins.
The Call of Duty series makes me so sad. I remember when cod 4 came out it felt like a genuinely groundbreaking and innovative thing and I was so pumped to see what IW did next. And then Activision took all of that talent that was genuinely exploring new ground in game development and stuck them in the yearly rerelease of the same damn game mill until everyone got burnt out and left.
For the record, Arc Raiders (just released) makes me feel like I'm back playing MW2 in the golden days. Just in the sense of playing an awesome game and riding the wave of popularity with everyone else.
Thanks, I'd heard whispers but hadn't jumped in yet. I will need to check this out.
(platinum rating on protondb too woohoo)
I've been trying to find time here and there to get the tumbleweeds out of my gaming pc just so I can try that game. Reviews and streams for it remind me a bit of the Dark Zone experience when the first Division game came out.
Arc Raiders, and their previous game The Finals, use AI in some capacity for voice acting - though they do still hire voice actors and make it explicit in the contract offer:
>Some of the voice lines were created using generative artificial intelligence tools, using an original sample of voice lines from voice actors hired specifically with an AI-use contractual clause, similar to the studio's production process in The Finals.
https://en.wikipedia.org/wiki/ARC_Raiders
Great game though, I'm really enjoying it too
Unfortunately, games' playerbases don't stick around long enough anymore for hard grinding to be worth it.
I thought they were on biyearly swapping with treyarch?
CoD4 in some ways was the beginning of the end for a lot that we took for granted in gaming up to that point. I remember when it released and a couple of us went to my friend's house to play it. Boy were we in for a shock when there was no co-op multiplayer like Halo 3 had.
If MW didn't have co-op multiplayer on console then that's another example of the Mandela effect.
It had split-screen local multiplayer, but you couldn't play online in that configuration
Not me. The mix of parkour and multiplayer shooting across beautiful, highly detailed maps is something I like a lot; nothing even compares in that regard. I know the game is a shameless skin store, but I do appreciate the former. Although I also hate how small a lot of the maps are... glances at Nuketown.
They stole all the parkour stuff from Titanfall, which was made by the original IW founders when they left and founded Respawn ;)
(I use "stole" in a non derogatory way here - 90% of good game design is cribbing together stuff that worked elsewhere in a slightly new form)
> They stole all the parkour stuff from Titanfall
Which in turn was likely quite inspired by Starsiege: Tribes
Totally. Titanfall 2 is one of my favorite games ever, but by the time I discovered it, the multiplayer was pretty much dead: no players and no recent updates.
Good single player campaign too, if anyone is interested
I hate how parkour has infested the FPS genre. There's this whole meta now that I don't care about at all, yet one has to learn it if you don't want to go 3 and 12, and it's in most games now.
It has been that way for decades, but previously the parkour stuff meant exploiting bugs in game engines, and only the top 1% or less of players could even pull off the complex inputs needed.
Personally, I was in the top 10% of HL2DM players, but because I couldn't master the inputs for skating, I wasn't able to compete with the truly elite-tier players who would zip around the map at breakneck speeds.
> get that shit outta there because we can’t own something we stole from someone else
How does anyone prove it though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.
It's partly about Netflix getting sued by someone claiming infringement, but also partly (maybe mostly) about Netflix maintaining their right to sue others for infringement.
The scenario looks like this:
* Be Netflix. Own some movie or series where the main elements (plot, characters, setting) were GenAI-created.
* See someone else using your plot/characters/setting in their own for-profit works.
* Try suing that someone else for copyright infringement.
* Get laughed out of court because the US Copyright Office has already said that GenAI is not copyrightable. [1]
[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
You need to say you improved on the work of AI and it's yours.
Now you can sue
This scenario only plays out if it is known what was or wasn't made with GenAI.
It would become known during discovery.
How can you find out if an AI created something versus a human with a pixel editor?
Are you kidding me? Everyone knows it's pirated content (aka stealing); there's a ton of proof here and there:
- https://arstechnica.com/tech-policy/2025/02/meta-torrented-o...
- https://news.bloomberglaw.com/ip-law/openai-risks-billions-a...
Other than that, just a bit of common sense tells you all you need to know about where the data comes from (datasets never released, outputs of the LLMs suspiciously close to original copyrighted content, AI founders openly saying that paying for copyrighted content is too costly, etc. etc. etc.)
Anyone with a brain knows it is not stolen, but nevertheless the fact that people will claim so is a risk.
It is stolen on a cultural level at least.
But since many of these models will blurt out very obviously infringing material without targeted prompting, it’s also an active, continuous thief.
Yeah. No. This document says, “our strategy is wait and see.” It’s the most disruptive media technology since the TV. And they’re like, “whatever.” That is not the move of a “smarter” creative company. Lawyers are really, really bad at running companies, even if you have strong opinions about the law.
Disruptive does not mean good, or useful, or important, or valuable. There is no reason to jump onto a thing early just because it is disruptive: Netflix exists in a different creative world than the tech industry, and its audiences are even more hostile to the idea that AI is being used to steal from the things and people they admire than the audiences of typical tech industry disruptions. People who care about art and artists and films and actors tend not to value slop.
Nobody values slop, and not everything is slop, AI or otherwise. Also, stealing is not the same as copyright infringement, unless you subscribe to the RIAA definition of the word.
AI has no intent or creativity, so it can be neither right nor wrong, neither good nor bad.
So just as there's no procedural difference between an AI getting something right and an AI "hallucinating", if the word "slop" describes anything AI generates, it describes all of it.
Either everything generative AI creates is slop or nothing is. So everything is.
Also I know stealing is not the same thing as copyright infringement. I'm talking about stealing livelihoods as much as stealing art.
>AI has no intent or creativity, so it can be neither right nor wrong, neither good nor bad.
AI is just a wrapper around a tool - it doesn't need intention or creativity because those come from the user in the form of prompts (which are by definition intentional)
It's just a Natural Language Interface for calling CLI tools mostly, just like how GUIs are just graphical interfaces for calling CLI tools, but no one thinks a GUI has no intentionality or creativity even when using stochastic/probabilistic tools
Anything a user can do with an AI they could also do with a GUI, it would just take longer and more practice
>Either everything generative AI creates is slop or nothing is. So everything is.
But then how do you know something is slop before you know if it's made with GenAI? Does all art exist as Schrodinger's Slop until you can prove GenAI was used? (if that's even possible)
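To make the "natural language interface over CLI tools" framing above concrete, here's a minimal sketch; the intent-to-command table is hypothetical and hard-coded, standing in for where a real system would have an LLM choose the tool and its arguments:

```python
# Minimal sketch of a natural-language front end over CLI tools. The intent
# table is a hypothetical stand-in for an LLM mapping a prompt to a command.
import shlex
import subprocess

INTENT_TO_COMMAND = {
    "list files": "ls -l",
    "show disk usage": "df -h",
}

def run_from_prompt(prompt: str) -> str:
    # The intentionality lives in the user's prompt; the tool just executes.
    command = INTENT_TO_COMMAND.get(prompt.lower().strip())
    if command is None:
        raise ValueError(f"no tool mapped for prompt: {prompt!r}")
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout

print(run_from_prompt("list files"))
```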
I see a big one missing:
* fully-generated content is public domain and copyright can not be applied to it.
Make sure any AI content gets substantially changed by humans, so that the result can be copyrighted.
More importantly: don't brag about which parts are fully AI generated - just shut up about it.
Otherwise: public domain.
> * fully-generated content is public domain and copyright can not be applied to it.
Simpler yet - and inevitable, on sufficiently long time scales - is to dispense entirely with the notion of intellectual property and treat _all_ content this way.
This would remove the incentive to generate content, no? Copyright duration could be much shorter, but I think artists, writers, etc. would prefer the continuing protection of their work. (And I'm pro-copyright reform.)
Shouldn’t be particularly surprising Netflix is leaning in here - they’ve been pretty open about viewing themselves as “second screen”/background content for people doing other things. Their primary need these days is for a large volume of somewhat passable content, especially content they can get for cheap. Spotify’s in a similar boat and has been filling the recommended playlists up with low-royalty elevator music.
"Generated material is temporary and not part of the final deliverables" sounds like they are not looking to generative AI for content that they will air to the public.
Later on they do have a note suggesting that the following might be OK if you use judgement and get their approval: "Using GenAI to generate background elements (e.g., signage, posters) that appear on camera"
"If you can confidently say "yes" to all the above, socializing the intended use with your Netflix contact may be sufficient. If you answer “no” or “unsure” to any of these principles, escalate to your Netflix contact for more guidance before proceeding, as written approval may be required."
They do want to save money by cheaply generating content, but it's only cheap if no expensive lawsuits result. Hence the need for clear boundaries and legal review of uses that may be risky from a copyright perspective.
Yeah, that's a fair assessment. The specific mention of "union-covered work" plays to that interpretation as well:
> GenAI is not used to replace or generate new talent performances or union-covered work without consent.
Yeah, I read the "Talent" section and it's very balanced. I can't see much, if anything, to complain about, so thank goodness for SAG-AFTRA. The strike a couple of years ago was well judged.
They also mention reputation/image in there. If I can't tell something is generated by AI (some background image in a small part of a scene), it's just CGI. But if it's the uncanny-valley view of a person/animal/thing that is clearly AI generated, that shows laziness.
Yup. Everything will be muzak in the end.
But what word should we coin as a buzzword for “Netflix-Muzak”?
And when we're saturated with it all, we'll start buying DVDs (or other future media) again.
Tbh I think these guidelines are just anticipating future trends.
Having spent some time in post-production, this reads more like a “please don’t get us sued” document.
Wow, this is actually really solid from Netflix. It doesn't just hype up AI but sets real boundaries too. I like how they focus on consent and data safety instead of just "use AI for everything". Feels like they actually understand the risks around creative work and performers. Kinda refreshing to see a big studio taking the responsible route.
One of the issues with using LLMs in content generation is that instruction tuning causes mode collapse. For example, if you ask an LLM to generate a random number between 1 and 10, it might pick something like 7 80% of the time. Base models do not exhibit the same behavior.
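A rough way to check this yourself, assuming the transformers library and a locally runnable model ("gpt2" below is a placeholder; the point is to run the same tally on a base checkpoint and an instruction-tuned one and compare):

```python
# Tally what a model answers when asked for a "random" number; a heavily
# skewed tally (e.g., "7" dominating) is the mode collapse described above.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

prompt = "Pick a random number between 1 and 10. Answer with only the number:"
counts = Counter()
for _ in range(100):
    out = generator(prompt, max_new_tokens=3, do_sample=True)
    completion = out[0]["generated_text"][len(prompt):]
    digits = "".join(ch for ch in completion if ch.isdigit())
    if digits:
        counts[digits[:2]] += 1

# Instruction-tuned checkpoints typically skew far harder than base ones.
print(counts.most_common())
```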
“Creative Output” has an entirely different meaning once you start to think about these models in terms of how they actually work.
Creativity is a really ill-defined term, but generally it has a lot more to do with abstract thinking and understanding subtlety and nuance than with mode collapse. Mode collapse affects variation, which is probably a part of creativity for some definitions of it, but they aren't the same at all.
This reads like a reasonable policy. More broadly speaking re: AI content: Sure, boomers scrolling facebook will continue to enjoy their AI slop baby and animal videos, but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Each time I scroll LinkedIn and see some obviously AI-produced images, with garbled text, etc., it immediately turns me off whatever content was associated with the image.
I'd be very disappointed to see the arts, including film making, shift away from the core of human expression.
“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” - Joanna Maciejewska
> but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Is that just because we are at the very beginning stages of the technology, though? If it just keeps getting better, will the bias against AI-generated content remain? I know people like to talk as if AI will always have the quality issues it has now, but I wouldn't count on that.
I'm not convinced that AI image generation _is_ getting better at this point. If anything, it seems to be getting somewhat weirder-looking.
Like, I gather that prompt adherence has improved somewhat, but the actual output still looks _very_ off.
Is it going to get better? Because people have been saying that for years now, and while AI output is somewhat improved, many of the issues with it have not changed.
The problem with AI slop isn't the AI part.
It's that not everyone has the talent to produce something of quality.
If you give a passionate professional chef the same ingredients for a full meal as your average home cook, the results will NOT be the same, by a far stretch.
Much of "AI slop" is to content what McDonald's is to food. It's technically edible but not high quality.
That’s an interesting way to put it, which asks the bigger question of (perhaps?):
Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
The people doing so do not have the talent they desire, nor did they do anything to upskill themselves. It's a shortcut to an illusion of competency.
> Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
Change the statement to: Do we want a society where everyone can masquerade as a “photographer”, flooding society with low-quality photos using cell phones, never having to learn to develop film, or use focus, or understand lenses...
Do we want a society where everyone can masquerade as a “painter”, flooding society with low-quality paintings because acrylics are cheap; the old masters made their own paint, after all...
Why does it matter how it was created? It wasn't Bob Ross's "Joy of Making Incredible Art", it was simply the "Joy of Painting".
And people do enjoy content that, for lack of a better word, is disposable. Look at the "short dramas" or "vertical dramas" industry that is making money hand over fist. The content isn't highbrow, but people enjoy it all the same.
> AI trained on the work product of actual artists?
Should we teach people how to play guitar without using the songs of other artists? Should those artists be compensated for inspiring others?
Some of this is an artifact of our ability to sell reproductions (and I would argue that the economics were all around distribution).
There is a long (possibly decades-long) conversation that we're going to have on this topic.
I think that's unfair to McDonald's.
I know that people get very up in arms about AI in creative industries - but I feel like people don't necessarily understand that even in creative industries there is a LOT of monotonous, exploitative grunt work.
For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by a massive team of illustrators for whom "creative liberty" is a liability to their career.
(And this example is only for the creative aspects of film-making. There is a lot of normal corporate and logistical stuff that never even affects what you see)
That's not to say I'm looking forward to the wave of lazy AI-infused slop that is heading our way. But I also don't necessarily agree with the grandstanding that AI is inherently anti-creative or only destructive. I reserve the right to be open-minded.
The irony is that movies and TV themselves represented a cheaper, industrialized and commoditized alternative to theater. And theater is still around and just as good as it ever was.
>For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by a massive team of illustrators for whom "creative liberty" is a liability to their career.
This is vastly oversimplifying and is misleading. Key animators have a highly creative role. The small decisions in the movements, the timings, the shapes, even scene layouts (Miyazaki didn't draw every layout in The Boy and the Heron), are creative decisions that Miyazaki handpicked his staff on the basis of. Miyazaki conceived of the opening scene [0] in that film with Shinya Ohira as the animator in mind [1]. Even in his early films, when he was known to exert more control, animator Yoshinori Kanada's signature style is evident in the movements and effects [2].
[0]: https://www.sakugabooru.com/post/show/260429
[1]: https://fullfrontal.moe/takeshi-honda-the-boy-and-the-heron-...
[2]: Search for "Kanada animated many sequences of the movie, but let’s just focus on the most famous one, the air battle scene." in https://animetudes.com/2021/05/15/directing-kanada/
> For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions.
Yes, but at least those decisions come from one person or a few people, not just an algorithm.
As an engineer and artist, I think a better comparison is painting -> photography. It took quite a while for photography to be considered an art, since it removed so much of the creative control from the artist. But it replaced them with new and different skills, particularly the value of curation.
Some skills, like framing, values, balance, etc. become even more important differentiators. Yes, it is much different. But as long as humans are in the loop, there is an opportunity for human communication.
>Some skills, like framing, values, balance, etc. become even more important differentiators.
I agree. I think many artists in the future will be closer to directors/cinematographers/editors than performers
Many of the skills artists have today will still be necessary and transferable, but what will separate the good artists from the bad artists will be their ability to communicate their ideas to agents / other humans
Same with software developers I suspect - communication will be the most important skill of all, and the rockstar loner devs who don't work well in teams will slowly phase out
As a software engineer, you still make the hard decisions and let Claude type them out for you. Isn't it similar?
I mean, yeah. No matter how you feel about AI and creativity, having AI make the creative choices is dumb and backwards.
What happens to the illustrators now?
They get to frolic on a farm upstate.
I'm curious if the parent poster thinks this is unique to film production, because I think you can make the same argument for pretty much any trade. Software engineering is 1% brilliance and 99% grunt work. That doesn't mean software engineers are going to enjoy a world where 99% of their job goes away.
Further, I'm not sure the customers will, because the fact that human labor is comparatively expensive puts some checks and balances in place. If content generation is free, the incentive is to produce higher-volume but lower-quality output, and it's a race to the bottom. In the same way, when content-farming and rage-baiting became a way to make money, all the mainstream "news" publishers converged on that.
Should we be optimising for a world that makes software engineers (or animators) in particular happy? The seen is the lost jobs but the unseen is that everyone else gets software (and animated entertainment) cheaper.
As it happens, I don't think "AI" is close to replacing many SEs or animators but in a world where it could, we should celebrate this huge boon to society.
> Using unowned training data (e.g., celebrity faces, copyrighted art)
How would one ever know that the GenAI output is not influenced by or based on copyrighted content?
Getty and Adobe offer models that were trained only on images that they have the rights to. Those models might meet Netflix’s standards?
Doesn’t seem likely that Adobe has an owned collection of content big enough. Seems very likely that they just deemed the legal risk to be outweighed by the commercial opportunity. They kinda had to - a product that generates stuff that gets you sued is not worth paying whatever they charge for the subscription.
I kind of wonder if that even works.
If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.
And then when it conjures something like that by description rather than by name, how are you any better off than something trained on random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.
I work in this space. In traditional diffusion-based regimes (paired image and text), one can absolutely check the text to remove all occurrences of Indiana Jones. Likewise, Adobe Stock has content moderation that ensures (up to the human moderation limit) no dirty content. To the model, it is a world without Indiana Jones.
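For what that kind of caption-level check can look like, here's a toy sketch (the blocklist and record format are illustrative, not any studio's actual pipeline):

```python
# Drop any training record whose caption names a blocklisted franchise.
import re

BLOCKLIST = [r"indiana\s+jones", r"harry\s+potter", r"mickey\s+mouse"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST]

def is_clean(caption: str) -> bool:
    return not any(p.search(caption) for p in PATTERNS)

dataset = [
    {"image": "img_001.jpg", "caption": "Indiana Jones runs from a boulder"},
    {"image": "img_002.jpg", "caption": "An adventurer with a whip in a temple"},
]

training_set = [rec for rec in dataset if is_clean(rec["caption"])]
print(training_set)  # only the second record survives
```

Note the obvious gap, which the replies below poke at: the second caption survives the filter even if the image itself still depicts the character.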
If you ask the Adobe stock image generation for "Adventurer with a whip and hat portrait view , Brown leather hat, jacket, close-up"
It gives you an image of Harrison Ford dressed like Indiana Jones.
https://stock.adobe.com/ca/images/adventurer-with-a-whip-and...
I don't know the data distribution, but are you sure that's generated by an Adobe model? I can only see that it is in Stock + it is tagged as AI generated (that is, was that image generated by some other model?)
Disclaimer: I used to work at Adobe GenAI. Opinions are my own, ofc.
Yeah, there's no way Indiana Jones was not in the training data that created that image. To even say it's not in there is James Clapper in front of Congress level lying.
> one can absolutely check the text to remove all occurrences of Indiana Jones
How do you handle this kind of prompt:
“Generate an image of a daring, whip-wielding archaeologist and adventurer, wearing a fedora hat and leather jacket. Here's some back-story about him: With a sharp wit and a knack for languages, he travels the globe in search of ancient artifacts, often racing against rival treasure hunters and battling supernatural forces. His adventures are filled with narrow escapes, booby traps, and encounters with historical and mythical relics. He’s equally at home in a university lecture hall as he is in a jungle temple or a desert ruin, blending academic expertise with fearless action. His journey is as much about uncovering history’s secrets as it is about confronting his own fears and personal demons.”
Try copy-pasting it into any image generation model. The result looks an awful lot like Indiana Jones on every attempt, yet I've not referenced Indiana Jones even once!
Emmmm, sure, but give this to a human artist who has never heard of Indiana Jones and see if they draw something similar.
It comes down to who is liable for the edge cases, I suspect. Adobe will compensate the end user if they get sued for using a Firefly-generated image (probably up to some limit).
Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.
Feels like "paying extra for the extended warranty" vibes. What it covers isn't much (do you expect someone to come after you in small claims court, and if they do, was that your main concern?), meanwhile the big claim you're actually worried about is what it doesn't cover.
And if you really wanted insurance then why not get it from an actual insurance company?
Because almost everything is risk mitigation or reduction, not elimination.
In particular, in the US, the legal apparatus has been gamified to the point that the expectation is that people will sue whenever their expected value is positive, even if the case is insane on its merits, because it's much more likely that someone facing enough risk and cost will settle as the cheaper option.
And in that world, there is nothing that completely eliminates the risk of being sued in bad faith - but the more things you put in your mitigation basket, the narrower the error bars are on the risk even if the 99.999th percentile is still the same.
All the indemnities I've read have clauses, though, saying that if you intentionally use it to make something copyrighted, they won't protect you.
So if you put obviously copyrighted things in the prompt you’ll still be on your own.
Adobe Firefly absolutely has a Spider-Man problem.
I think it would be very, very difficult - almost impossible - to create a training dataset for an image generator that doesn't contain any copyrighted material you don't have the rights to. There's the obvious stuff like Mickey Mouse or Superman, which you can filter out by running some other tool over the data, but so many ridiculous things can be copyrighted (depictions of buildings, tattoos), plus crowd shots and pictures of cities with ads in the background, that I don't know how you could do it. I'm sure even Adobe's stock library has a lot of violations like that.
Whistleblowers, corporate leaks, output resembling copyrighted content, etc. Basically, it feels the same as companies that unlawfully use licensed code as their own (e.g., without respecting the GPL).
Netflix could also use or provide their own TV/movie productions as training data.
Lionsgate tried that and found that even their entire archive wasn't nearly enough to produce a useful model: https://www.thewrap.com/lionsgate-runway-ai-deal-ip-model-co... and https://futurism.com/artificial-intelligence/lionsgate-movie...
This amuses me.
Consumers have long wanted a single place to access all content. Netflix was probably the closest we ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use or, more likely, they will all put up a big wall and suck eggs.
Maybe now all that product placement is finally coming back to haunt them.
So they admit it. They don't make movies; they produce content.
>GenAI is not used to replace or generate new talent performances
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.
And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.
But the solution was never to ban technology.
Eventually consumers will use the technology to replace studios.
Any studio that isn't playing ostrich has realized this (so possibly none of them) and should just be trying to extract as much value as possible, as quickly as possible, before everything goes belly up.
Of course timelines are still unclear. It could be 5 years or 20, but it is coming.
The part you quote is part of the list of conditions for an if-statement, so how could it be a lie?
The issue wasn't whether they said that thing or not; companies say a lot of things which are fundamentally a lie, things to keep up appearances – which are oftentimes not enforced. It's like companies claiming they believe in fair pay while using Chinese sweatshops or whatever.
In this case, for instance, Netflix still has a relationship with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature-length film indistinguishable from a traditional one. Also, they might be apprehensive about legal risks and copyrightability at this exact moment; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle as far as legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
The if-statement "If you want to do X, you need to get approval." probably does actually reflect what Netflix truly thinks, but it doesn't mean they believe X shouldn't be done. It means they believe X is risky and they want to be in control of whether X is done or not.
I don't see how you could read the article and come away with the impression that Netflix believes GenAI shouldn't be used to replace or generate new talent performances.
I'm inclined to agree. The goalposts will move once the time is right. I've already personally witnessed it happening: a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later, the marketing has shifted to cost optimization, efficiency, and better "uptime" compared to real employees.
The truth is that Netflix, Amazon, or any other company, honestly, would fire 99% of their workforce if it were possible, because they only care about profit – hell, they are companies, that's why they exist. At the same time, brands have to pretend they care about society, people having jobs, the climate, whatever, so they can't simply say: "Yeah, we exist to make money and we totally want to fire you guys as soon as possible." As you said, it's all masked as staff augmentation and other technical mumbo jumbo.
>GenAI is not used to replace or generate new talent performances
>> This is 100% a lie.
We've had CGI for decades and generally don't mind. However, at the point where AI usage becomes a negative (e.g., the content appears low quality), I'd expect some backlash and a pulling back in the industry.
In film and TV, customers have so much choice. If a film or show is low effort, it's likely going to get low ratings.
Every business and industry is obviously incentivized to cut costs, but if those cuts directly affect the reputation and image of your final product, you probably want to choose wisely which things you cut.
I think you're right, in general - certainly AI will replace background actors, though that's already been happening for years without AI generation. I'm also pretty sure that if/when AI can generate whole films, then that'll happen, too.
However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?
I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie at human quality today, so Netflix issuing a statement like this now, in November 2025, costs them literally nothing, and it feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
When AI gets good enough, 2, 3, 5, 10 years from now, they can simply reverse course, and this statement won't delay Netflix embracing AI films much, if at all.
> I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
>
> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't have delay Netflix embracing AI films that much, if anything.
There’s no guarantee AI will get good enough to replace anyone. We’ve pretty much run out of training data at this point. I’m a little annoyed that people speak about future progress like it’s an inevitability.
You’re saying their statement about what is happening is a lie because of what you predict will happen…
I'd be very surprised if Netflix doesn't go all in on slop, given their recent catalogue.
I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.
They do not want to be disrupted.
They're already disrupted.
Just look at people in their early 20s. They don't watch shows or movies; they only watch short-form videos. And short-form videos will mostly be created using GenAI tools as early as 2026.
Netflix joins everyone else jumping on the "rules for thee, but not for me" train.
Worth reading alongside: Equity’s GDPR FAQ.
https://www.equity.org.uk/advice-and-support/know-your-right...
Common-sense, practical, and covers a lot of the shifting ground around an artist’s ability to withdraw consent under GDPR and the ways they can properly use this to prevent their likenesses being used to train their digital replacements.
(Equity is the UK equivalent of the AEA and SAG-AFTRA combined)
I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or not. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired when they strike, since the customer is always right.
> it's not for businesses to set the rules as if they owned the place.
... Of course it is. As the distributor, Netflix obviously has a fairly broad ability to control what it distributes.
>I am thinking of building an association of AI consumers
The Gooner Association?
> it's not for businesses to set the rules as if they owned the place.
This is for studios and companies that are producing content for Netflix.
If you want to sell to Netflix, you have to play by Netflix's rules.
Netflix has all kinds of rules and guidelines, including which camera bodies and lenses are allowed [1].
[1] https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...
It's interesting that they don't explicitly state the fact that AI-generated content cannot be copyrighted. They seem to dance around that. The provision against generating a major character is framed around respect for talent and so on, rather than the fact that doing so would put the major character in the public domain, free for anyone to use for anything.
I wonder if we're going to see a push back by media companies around copyright over AI-generated content. Though I don't see how; copyright is explicitly an artificial legal protection of human works.
Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they're recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.
Thankfully GenAI has no creative potential so we aren’t losing much.
I do agree Netflix wants to crush creators.