Ever since DeepSeek emerged on the AI scene in January last year, AI nationalism has become much more explicit in the US. The emergence of Chinese players as major strategic AI competitors has US Big Tech, Big Content and the White House galvanised in a unified nationalistic front.
The USA vs China AI narrative boiled over again recently with the release of Seedance 2.0, a highly capable updated AI video model put out by ByteDance, the Chinese company behind TikTok. Some attention has been placed on what Seedance can do and how that may present a challenge to the US film industry (and filmmaking in other countries too). Yet, in terms of the response from Big Content players in America, it has been less a discussion of how to improve US AI video models to keep up with Chinese innovation and more an attempt to quash Chinese competition using copyright.
Looking more broadly at AI coming out of China, American AI companies continue to allege that Chinese AI companies have been lifting their models. American responses to Chinese AI tend not to consider the advancements they bring, instead casting them as ‘cheap knock-offs’ that stole their competitive advantage by ripping off US AI models. The final piece in this triad is White House AI policy and the advancement of an America-first AI dream. The Trump administration is fully behind a vision of global AI dominance with the US as the gold standard.
DeepSeek may have been a Sputnik moment for AI, but the tools in this arms race are quite different.
Copyright, not filmmaking, is behind Hollywood’s attack on Seedance
Two weeks ago, two Chinese video generation models were released: Seedance 2.0 and Kling 3.0. Both are reportedly performing well, but Seedance saw the most attention, so the focus here is on it. This is probably because Seedance was released by ByteDance, the company behind TikTok. That automatically makes it more newsworthy than an update to a model put out by a far more obscure AI company ⟨ had you heard of Kling 3.0 or Kuaishou before now? ⟩
There is an overview of Seedance 2.0 in WTF now?! #22, but to quickly summarise: the updated model put out in limited beta is being praised for its multimodal inputs, audio-video synchronisation and cinema-quality multishot sequences. Combine this with prompting directed by references to input material (think something like ‘Use the characters from @Image1, camera movement from @Video2 and background music from @Audio3’) and what you get is highly consistent, high-quality, cinematic video that is very specific to the user’s prompts.
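The reference-style prompting described above can be pictured as a structured request in which @-handles in the prompt point at uploaded inputs. To be clear, ByteDance has not published a public Seedance API, so the field names and payload shape below are entirely hypothetical, sketched only to illustrate how references tie a prompt to its input material:

```python
import re

# Hypothetical sketch of a reference-based video generation request.
# Every field name here is invented for illustration; it is not the
# actual Seedance interface.
request = {
    "prompt": (
        "Use the characters from @Image1, camera movement from @Video2 "
        "and background music from @Audio3. Wide establishing shot, "
        "slow dolly-in, then cut to close-up."
    ),
    # Seedance reportedly accepts up to nine images, three video clips
    # and three audio clips alongside the natural language prompt.
    "references": {
        "@Image1": "characters.png",
        "@Video2": "camera_move.mp4",
        "@Audio3": "score.mp3",
    },
}

def validate_references(req):
    """Check that every @-reference in the prompt has a matching input."""
    mentioned = set(re.findall(r"@\w+", req["prompt"]))
    provided = set(req["references"])
    return mentioned <= provided

print(validate_references(request))  # → True
```

The point of the sketch is simply that the prompt alone is not the whole input: the quality of the output is anchored to the quality of the referenced material.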
Many Seedance-generated clips have done the rounds on social media and news feeds and provoked the ire of Hollywood, including one of Brad Pitt and Tom Cruise fighting on a rooftop.
This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk. pic.twitter.com/dNTyLUIwAV
— Ruairi Robinson (@RuairiRobinson) February 11, 2026
Irish filmmaker Ruairí Robinson’s X post about the capabilities of Seedance 2.0.
As I said in WTF now?!, you could be excused for thinking you were watching a teaser trailer for Mission Improbable 10. While it is (seemingly) impressive, I am not convinced Seedance actually poses a significant threat to Hollywood filmmaking or mainstream filmmaking per se. Rather, the advances in AI video generation are more threatening to US copyright portfolios and US AI models.
Seedance won’t replace filmmakers
Not everyone is convinced by the flashy demo videos doing the rounds. Undoubtedly Seedance 2.0 is a big step forward for AI video generation, rough edges and all, but the demos overstate what the model does on its own. As visual effects artist Aron Peterson’s debunking of Seedance notes, the effectiveness of the Brad Pitt and Tom Cruise fight scene is highly dependent on face swap and pre-existing green screen fight choreography. The idea that this clip was entirely generated by AI from nothing more than a text prompt is a fallacy.
What this means is that much of the skill and talent of filmmaking is still needed to produce quality video outputs using Seedance. To illustrate, here’s Peterson’s take on including green screen clips in your Seedance setup:
“It’s important to note that not anyone could shoot the green screen video [in the Pitt–Cruise fight scene]. Video to video requires excellent input/source material for best results, as Seedance did. Hiring a green screen studio, stuntmen, choreographer, lighting crew and cameraman would cost a couple of grand a day on the low end. Then there is the cost of generating. We don’t yet know how often Seedance users will have unusable output. The discard rate in generative media tends to be very high. By unusable we mean 'not good enough for the big screen’ where regular errors and artefacts ruin the viewing experience.”
In ByteDance’s defence, they were clear that multimodal input is fundamental to getting quality video out of Seedance. Precise instructions such as shot types, camera movement and sequencing combine with the user’s inputs (up to nine images, three video clips, three audio clips and natural language prompts) through references to generate a specific clip. Using Seedance well therefore requires a level of filmmaking knowledge, skill and talent that the average person does not have.
As Deadpool & Wolverine co-writer Rhett Reese put it:
“In next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases. True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous.”
In next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases. True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that… https://t.co/hqHUgRk8N4
— Rhett Reese (@RhettReese) February 11, 2026
Rhett Reese’s X post about the threat Seedance 2.0 poses to Hollywood.
It is precisely for that reason that I think Seedance and similar technologies will become an important add-on to filmmaking. But I don’t think it will supplant human creativity.
Even so, there is a nauseating irony in the claims that Seedance will do away with the need for human creative labour in the film and television industry. Industry players are happy to fire that shot at Seedance but seemingly turn a blind eye to the fact that exactly that promise – replacing human labour – is at the heart of marketing AI to business leaders across all kinds of industries. The sales pitch goes something like this: ‘think of all the wages you can save if you replace your humans doing administration/analysis/coding/customer service/sales/pretty much any other role with AI.’ Yet, even as that proposition is mulled over by senior leadership, there seems to be little critical reflection on the fact that AI is known to be substandard – producing nonfacts and other hallucinations that require human checking – yet is being sold to enterprises at a premium price.
Release the cease and desists!
As mentioned, Hollywood’s reaction to Seedance is not about its video generation capabilities. Rather, their wrath stems from the ease with which users can generate videos featuring their characters and other copyrighted IP. In addition to the Brad Pitt–Tom Cruise fight scene, there were plenty of other clips circulating using Spider-Man, Darth Vader and other instantly recognisable characters. But arguably that is exactly what Seedance wanted. Why? Well, as Peterson put it:
“Physical talent and celebrity matters to fans and followers. That’s why the Seedance fight demo [with Brad Pitt and Tom Cruise’s likeness] had to use the faces of two celebrities for wide reach and hype. Without that it would have just been two stuntmen and an AI filter. Nothing to talk about.”
ByteDance knew there would be much more hype if Hollywood properties were shown in clips. They took a calculated copyright risk for a marketing payoff. Of course, it did not take long for the Big Content studios to send cease and desist letters on the basis that users have been indiscriminately generating videos using Hollywood actors’ likenesses and settings that look very similar to Hollywood titles.
Disney hit first, claiming ByteDance handed Seedance “a pirated library of Disney's copyrighted characters from Star Wars, Marvel, and other Disney franchises, as if Disney's coveted intellectual property were free public domain clip art.” Then came Paramount Skydance who said, “much of the content that the Seed Platforms produce contains vivid depictions of Paramount’s famous and iconic franchises and characters, which are protected under copyright law, trademark law, and the law of unfair competition (among other doctrines).” Warner Bros. Discovery got in there as well, saying that, “ByteDance is infringing Warner Bros. Discovery’s copyrights in plain sight, in an apparent attempt to promote and establish consumer demand for Seedance.”
The film industry bodies were also quick to come out against Seedance. Motion Picture Association (MPA) CEO and Chairman Charles Rivkin issued a statement saying:
“In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale. By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs. ByteDance should immediately cease its infringing activity.”
SAG-AFTRA said:
“SAG-AFTRA stands with the studios in condemning the blatant infringement enabled by Bytedance's new AI video model Seedance 2.0. The infringement includes the unauthorized use of our members' voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”
And the Human Artistry Campaign said:
“The launch of Seedance 2.0 is an attack on every creator around the world. Stealing human creators’ work in an attempt to replace them with AI-generated slop is destructive to our culture: stealing isn’t innovation …”
Knowing Hollywood’s rapid-fire copyright criticisms would be coming, ByteDance quickly responded with a commitment to “[take] steps to strengthen current safeguards as we work to prevent the unauthorised use of intellectual property and likeness by users.”
It is worth unpacking the swift and aggressive response from Hollywood to ByteDance’s Seedance 2.0 AI video model. Disney, Paramount and Warner Bros. have already sent cease and desists to ByteDance and their industry bodies have pooh-poohed the technology. All to be expected.
The Big Content players with their vast copyright portfolios gathered over successive acquisitions and mergers have been penning deals almost weekly with Big Tech to allow their IP to be intermingled in AI outputs: Disney–OpenAI, Universal Music–Udio, News Corp–OpenAI are just a few of these deals. Many of these arrangements have two things in common: the terms of the agreement are not publicly known and they are made between American Big Content companies and American Big Tech companies.
With ByteDance being a Chinese competitor in the AI market, it is unlikely we will see similar deals put on the table for it after the initial cease and desist dust settles. It is more likely the keys to that bountiful IP chest will be reserved for fellow Americans. Even if that prediction is wrong and Hollywood grants Seedance access to its IP, how might the Trump administration respond, given the White House’s view that AI is a race for global dominance that America must win? Will ByteDance be forced to set up a US-based joint venture for Seedance similar to the arrangement that resulted in TikTok US?
Don’t forget the reactions to DeepSeek
While Seedance is a current reminder of the AI rivalry between the US and China, the national race goes back further. It came to a head when DeepSeek-R1 was released on Monday 20 January last year. Almost immediately, OpenAI and Microsoft claimed the Chinese model was trained using distillation – where a smaller model quickly and cheaply approaches the performance of a larger, more capable model by learning from its outputs. OpenAI recently used a letter to the US House Select Committee on the Chinese Communist Party to reiterate its concerns that DeepSeek is engaging in “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.”
There is general agreement in the industry that distillation can be done legitimately. To illustrate, many AI companies have smaller, cheaper models that were created by distilling their own frontier models. There are also official distillation tools, such as the model distillation pipeline within the OpenAI platform or Amazon Bedrock Model Distillation. These uses contrast with what is broadly identified as illegitimate distillation: competitive extraction, where a competing AI company uses large numbers of fraudulent accounts to run millions of queries against a frontier model to quickly and cheaply benefit from its reasoning and capabilities. Such actions violate the frontier model’s Terms of Service.
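The core mechanics of distillation are the same whether done legitimately or not: the student model is trained to match the teacher’s full output distribution rather than just the final answer. This is a minimal toy sketch of the classic soft-target approach (Hinton-style knowledge distillation) using nothing but the standard library – not any lab’s actual pipeline, and the numbers are invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution.
    A temperature above 1 softens the distribution, exposing the
    teacher's view of how similar the non-top answers are."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions. The student is trained to minimise this, inheriting
    the teacher's relative preferences, not just its top answer."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a lower loss than
# one that merely gets the top answer right with everything else flat.
teacher = [4.0, 2.0, 0.5]
close_student = [3.8, 2.1, 0.4]
crude_student = [4.0, 0.0, 0.0]
print(distillation_loss(teacher, close_student) <
      distillation_loss(teacher, crude_student))  # → True
```

The same objective describes both the sanctioned pipelines and the alleged extraction campaigns; what differs is whether the teacher’s outputs were obtained with the provider’s permission.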
Google have also raised concerns recently about a rise in distillation attacks on their models, but did not identify regions or AI companies (although they did identify other nefarious uses of AI by “threat actors” from “the Democratic People's Republic of Korea (DPRK), Iran, the People's Republic of China (PRC), and Russia …”). Even though Google did not name Chinese AI companies as perpetrators of distillation, they bemoaned alleged cloners ambushing Gemini in an attempt to extract the underlying model. Google is not shy about declaring such ‘distillation attacks’ to be “a method of intellectual property theft that violates Google’s terms of service.”
Anthropic, on the other hand, had no qualms about pointing fingers. It publicly named DeepSeek and fellow Chinese AI companies Moonshot and MiniMax as being behind “industrial-scale campaigns … to illicitly extract Claude’s capabilities to improve their own models.” In that announcement, Anthropic warns that safeguards are likely stripped in illicitly distilled models and cautions that such distillation attacks should not be viewed as evidence that export controls on compute hardware do not work. ⟨ While I agree, saying “the threat extends beyond any single company or region” feels pretty disingenuous given how dominant the US is in the AI industry. ⟩ In response, Anthropic calls for “rapid, coordinated action among industry players, policymakers, and the global AI community.”
While the risks associated with illegitimate distillation are palpable, that doesn’t mean we shouldn’t look critically at the arrangements surrounding legitimate distillation. By locating ‘good’ distillation within the proprietary ecosystems of US AI companies we are effectively allowing them to gatekeep, cement their first-mover advantage, further centralise power, and giving them a window through which to surveil the global AI landscape. The rhetoric seems to go that if you want to peer into the AI black boxes you need to ask America for permission. Tying ‘bad’ distillation to claims of stripped safety measures and national security threats (however legitimate those claims may be) becomes a pretext for anticompetitive gatekeeping that allows a US centricity to perpetuate if left unchecked.
Also worth thinking about is the normalisation of the idea that American AI companies should have free rein over everyone’s copyrighted content on the free and open internet to train their AI models, alongside the assertion that those models are IP. ⟨ To be clear, the underlying code is likely protected under copyright, but other components of how a model works, such as the weights (numerical parameters) of a trained model, are less clear. Perhaps the assumption is they are trade secrets, which can be difficult to enforce. But I digress … ⟩
American Big Tech have relied on (and defend that reliance on) fair use to get around copyright infringement as their scraper bots continue to trawl the free and open web harvesting any content they can. In stark contrast to their approach to training data, those same AI companies are taking a hard-nosed copyright maximalist approach to asserting rights in their systems.
Everything is golden for US AI in Trump’s White House
Putting the US Department of War’s (DoW) dumping of Anthropic to one side, US AI policy is largely about ensuring American dominance of the AI market. Whether it is the January 2025 Executive Order Removing Barriers to American Leadership in Artificial Intelligence or the pillars of Winning the AI Race: America’s AI Action Plan, Trump is unabashed in thinking America should “lead the world in AI”.
The launch of DeepSeek was a “wake-up call” for the US AI industry according to the Trump administration. What followed was a knee-jerk banning of DeepSeek by US government bodies including Congress, the US Navy, the Pentagon and NASA. Trump also instructed the National Security Council (NSC) to investigate the potential national security implications – the same reasoning behind the TikTok ban.
The approach seems to fit with Trump’s AI Action Plan. Across three pillars – accelerating AI innovation, building American AI infrastructure and leading international AI diplomacy and security – Trump wants to “... establish American AI—from our advanced semiconductors to our models to our applications—as the gold standard for AI worldwide and ensure our allies are building on American technology.” Across the ambitious plan, the many policy actions are heavily orientated towards America ‘winning’ the AI race.
Concluding comments
Whether it is Big Content, Big Tech or the White House, what is being espoused in the US is an AI nationalism that, rightly or wrongly, views America as the global leader in AI. In all three contexts, AI competition is framed through a national security, national identity and economic protectionist lens. This is playing out in response to Seedance as a battle over iconic IP rather than a constructive and reflective look at innovation in AI as a filmmaking tool. Meanwhile, US AI developers stigmatise illegitimate distillation on safety and legal grounds while staying quiet about how that also positions them as gatekeepers over AI innovation. And sitting atop both is the Trump administration’s geopolitical America-first AI strategy that conveniently sidelines Chinese competitors. All these responses suggest that the primary concern for America is not just the quality of Chinese AI (whether they admit it or not), but also the potential loss of control over the global AI market and the IP that fuels it. As international competition intensifies, the line between protecting innovation and anticompetitive gatekeeping will only get blurrier.
Colophon
Reuse
AI use
No part of the text of this blog post was generated using AI. The original text was not modified or improved using AI.
AI was used to generate ideas and interrogate the subject matter of this blog post, but no AI-generated content was used verbatim.
The banner graphic (i.e. the first image at the top of the blog post) was adapted from vector graphics generated in Adobe Illustrator using Firefly 4 with 'Subject' content type selected and the lowest level of detail set. { Text to Vector Graphic prompt: An outline of an eagle, simple shapes, 80s retro style. }
Provenance
This blog post was first published on Friday 6 March 2025. It has not been updated. This is version 1.0.