Richard Batt |
Seedance 2.0 Can Generate Any Character on Video
Tags: AI Tools, Business
In January, ByteDance released Seedance 2.0, their AI video generation tool. Within days, people were using it to create videos of celebrities saying things they never said. Someone generated a video of Tom Cruise acting in a scene he never filmed. Someone else created a video of Dwayne Johnson promoting a product he had never endorsed. The videos were not perfect: there were artefacts and oddities in the eyes, and the mouth movements were slightly off. But they were realistic enough to fool most viewers who did not look closely.
Key Takeaways
- This Is Not Just a Hollywood Problem
- The Three IP Risks Every Business Needs to Understand
- The IP Safety Checklist Every Business Using Generative AI Should Use
- What Your Legal Team Needs to Know
- The Broader Question
By week two, cease-and-desist letters from Disney, Paramount, and major talent unions began arriving at ByteDance's offices. The letters were unambiguous: you cannot use our copyrighted characters or our clients' likenesses in your AI-generated content.
ByteDance's response was equally swift. They committed to adding IP protection controls: watermarking, detection of copyrighted characters, and screening of generated video for unauthorized use of real people's likenesses.
This confrontation is not unique to ByteDance or Hollywood studios. It is the opening act of an IP crisis that every business using generative AI needs to prepare for.
This Is Not Just a Hollywood Problem
I raise this not because most readers will use Seedance 2.0. I raise it because the underlying legal and business risks from generative AI have moved from theoretical to immediate. And they affect far more than video generation.
A client I worked with in 2025 was using AI image generation (Midjourney) to create marketing assets. They generated product photography, conceptual lifestyle images, and marketing collateral. Then their legal team ran a reverse image search on a sample and discovered something concerning: several of their AI-generated images had similarities to existing copyrighted photographs. The AI had not copied them exactly, but the compositions, poses, and styling were eerily similar to copyrighted stock photography.
Was this infringement? The legal answer is genuinely unclear. But it was enough to prompt their legal team to audit every image, remove the questionable ones, and implement new workflows: they now only use AI images they have commissioned specifically, with clear contractual language about IP ownership.
A professional services firm I advised was using AI to write client proposals and thought leadership content. One of their pieces, a white paper on AI strategy, was flagged by a plagiarism checker as being 8% similar to an existing published paper. Investigation revealed that the AI had actually woven phrases and structural patterns from existing papers into its output, including some verbatim sentences in the methodology section. The firm had to retract the white paper and rewrite it. The reputational damage was modest but real.
A tech startup I worked with was using AI voice generation to create tutorial videos. They generated a voice narrator that sounded professional and clear. Months later, they received a letter from a voice actor whose voice they had not explicitly used, but whose voice had apparently been incorporated into the training data for the AI system. The voice actor claimed infringement. The dispute was settled, but not cheaply.
These scenarios are no longer edge cases. They are becoming routine. And they all point to the same underlying issue: generative AI trains on vast datasets that include copyrighted material, and when AI generates output, it can produce content that is similar enough to existing copyrighted work to create genuine legal liability.
The Three IP Risks Every Business Needs to Understand
Risk One: Your AI Output Resembles Copyrighted Work
When an AI model trains on millions of images, videos, or texts, it learns patterns and associations. When you ask it to generate something new, it can produce output that, while original in structure, resembles existing copyrighted work in composition, style, or substance. This is particularly true for visual content (images, video) and is less clear in legal terms than direct copying, but it is real.
The legal risk depends on several factors: how similar is the output? Is it substantially similar, or just similar? Would a reasonable person think they are looking at the same work? Has the copyright holder registered their work? What jurisdiction are you operating in?
These questions do not have clear answers yet. But that uncertainty is itself a risk. You could generate content, publish it, and then face a cease-and-desist demanding you take it down or face litigation.
Risk Two: Your AI Uses Someone's Likeness or Voice Without Permission
This is where Seedance 2.0 ran into trouble. If an AI generates video or audio that uses a real person's likeness, name, or voice without their permission, that person has multiple legal claims: right of publicity (which varies by jurisdiction), defamation (if the content is false and damaging), and potentially harassment or fraud (depending on how the content is used).
The question of whether the AI trained on that person's image or actually synthesised it from scratch is immaterial from a legal perspective. If the output uses or references a real person's likeness, the risk is real.
This affects businesses even if you never intended to create content featuring real people. If someone uses your AI tool to generate content featuring a celebrity, and that content is published or goes viral, you, as the tool provider, could be implicated.
Risk Three: Your AI Was Trained on Data You Did Not Own or Have Permission to Use
Most commercial AI models (GPT-4, Claude, Gemini) were trained on massive datasets that include copyrighted material. OpenAI and other providers argue this training falls under fair use. But there are active lawsuits challenging this premise, brought by authors, artists, and news organisations claiming that their copyrighted work was used without permission to train AI that now competes with them.
As a user of these AI tools, you may not face direct legal liability for how the models were trained. But you do face reputational and ethical risk. And if you are generating content at commercial scale using AI trained on contested data, you inherit some of that uncertainty.
The IP Safety Checklist Every Business Using Generative AI Should Use
If you are using AI to generate content (images, video, text, audio) for commercial purposes, here is a practical checklist I recommend to every client:
For AI-Generated Images and Visual Content:
First, does your AI tool provide clear IP ownership? Can you use the generated images commercially? Get this in writing from your vendor. Second, run your AI-generated images through reverse image search (Google Images, TinEye) to check for suspicious similarities to existing copyrighted work. If you find similarities, do not use that image. Third, if you are using AI to generate images that feature people, ensure those people are not recognisable as real people. Fourth, consider having your legal team review a sample of generated images before you deploy them at scale.
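To make the second step concrete, here is a minimal sketch of how an automated pre-screen for near-duplicate images can work, using a perceptual "average hash" implemented from scratch on tiny grayscale grids. This is purely illustrative: in practice you would use a library such as imagehash on real image files, and a reverse image search against the wider web remains the stronger check. The 4x4 grids and the distance threshold below are assumptions for the example.

```python
def average_hash(pixels):
    """Compute a simple average hash over a small grayscale grid.

    pixels: 2D list of grayscale values (e.g. an 8x8 downscale of an image).
    Returns a bit string: '1' where a pixel is brighter than the mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; low distance means visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale downscales of two images being compared
img_a = [[10, 200, 30, 220], [15, 210, 25, 215],
         [12, 205, 28, 218], [11, 199, 31, 221]]
img_b = [[12, 198, 33, 219], [16, 208, 27, 214],
         [13, 203, 29, 216], [10, 201, 30, 222]]

ha, hb = average_hash(img_a), average_hash(img_b)
# Threshold of 3 bits is illustrative; tune against your own asset library
if hamming_distance(ha, hb) <= 3:
    print("flag for manual review")
```

The point of a pre-screen like this is only triage: it cheaply flags pairs worth a human look, it does not decide whether anything infringes.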
For AI-Generated Text:
First, always use plagiarism detection software on AI-generated content before publishing. Second, review the AI output for any verbatim passages that may be too similar to existing work. Third, especially for claims-based content (white papers, research), verify that factual claims are accurate and not just plausible-sounding. Fourth, if the AI cites sources, verify those sources exist and have not been hallucinated.
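As a rough first pass before a proper plagiarism checker, a simple word n-gram overlap check can flag drafts worth closer inspection. A sketch, with an illustrative threshold; commercial tools use far more sophisticated matching across much larger corpora.

```python
def ngrams(text, n=5):
    """Set of word n-grams in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft, reference, n=5):
    """Fraction of the draft's word n-grams that also appear in the reference."""
    d, r = ngrams(draft, n), ngrams(reference, n)
    return len(d & r) / len(d) if d else 0.0

draft = "our methodology follows a three stage review process for each model"
reference = "the methodology follows a three stage review process as described"
ratio = overlap_ratio(draft, reference)
if ratio > 0.05:  # threshold is illustrative; tune against your own content
    print(f"possible overlap: {ratio:.0%} of 5-grams shared")
```

A check like this catches reused phrasing, not reused ideas or structure, so it supplements rather than replaces a dedicated tool.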
For AI-Generated Video and Audio:
First, ensure the tool you are using does not allow generation of content featuring real people without explicit safeguards. Second, if you are using AI voice generation, get clear contractual language from the vendor stating that you own the generated voice and can use it commercially. Third, do not use AI to generate video featuring celebrities, public figures, or anyone whose likeness is recognisable, even if the vendor permits it. Fourth, add watermarks or clear disclosure that content is AI-generated, which provides some legal protection and manages user expectations.
For All AI-Generated Content:
First, maintain records of your AI tool's terms of service and IP ownership language. Second, audit a sample of AI-generated content monthly for potential IP issues. Third, have a take-down protocol: if you are notified that AI-generated content violates someone's IP, remove it immediately and investigate. Fourth, consider IP insurance or at minimum consult with counsel on your exposure.
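The record-keeping and monthly audit steps above can be sketched in a few lines. The record schema here is hypothetical (the field names are illustrative, not a standard), but the idea is simple: anything missing a terms-of-service version or a human review flag always makes the audit sample.

```python
import random

# Hypothetical record schema; field names are illustrative, not a standard.
records = [
    {"asset_id": "img-0412", "tool": "Midjourney",
     "terms_version": "2025-06", "reviewed": True},
    {"asset_id": "doc-0098", "tool": "ChatGPT",
     "terms_version": None, "reviewed": False},
]

def monthly_audit_sample(records, k=25):
    """Pick up to k records for manual IP review, putting obvious gaps first.

    A 'gap' is a record with no recorded terms-of-service version or no
    human review flag; gaps are always included before random sampling.
    """
    gaps = [r for r in records if not r["reviewed"] or r["terms_version"] is None]
    rest = [r for r in records if r not in gaps]
    sample = gaps + random.sample(rest, min(len(rest), max(0, k - len(gaps))))
    return sample[:k]

for r in monthly_audit_sample(records, k=5):
    status = "missing terms version" if r["terms_version"] is None else "ok"
    print(r["asset_id"], status)
```

Even a spreadsheet with the same fields works; the value is in having the record at all when a take-down notice arrives.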
What Your Legal Team Needs to Know
If you have not already, you need to have a conversation with your legal team about generative AI. Most in-house counsel are not yet up to speed on the IP risks. Here is what I recommend you brief them on:
First, the regulatory environment is changing rapidly. The EU's AI Act includes IP protections. Various jurisdictions are considering or passing laws on synthetic media and deepfakes. Your legal exposure today may be different in six months.
Second, the case law is sparse but growing. The Authors Guild and other organisations have ongoing lawsuits against AI companies. As those cases resolve, they will set precedent that affects your use of AI tools.
Third, your vendors, the AI tool providers, have shifting IP policies. OpenAI recently updated their IP indemnification for ChatGPT Plus subscribers (they will cover some IP claims, not others). Other providers are doing the same. Make sure you understand what your vendor will and will not cover.
Fourth, the safest approach is to treat AI-generated content as you would treat any vendor-supplied content: assume you need to verify it, review it, and potentially add disclaimers. Do not assume it is safe to use just because a machine generated it.
The Broader Question
The Seedance 2.0 cease-and-desist letters are not really about whether a specific video infringes copyright. They are about the entertainment industry signalling a clear boundary: do not use our IP without permission, even if you can technically do so.
That signal is now broadcast across every industry. Every business using generative AI is receiving the same message: we are watching, and we will enforce our IP rights.
That does not mean you cannot use AI. It means you need to use it thoughtfully, with clear IP safeguards, and with the understanding that the legal market is still settling. The organisations that will navigate this successfully are the ones that treat IP risk seriously from day one.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
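That arithmetic can be written out in a few lines. The figures below are illustrative only, not a quote, and the calculation deliberately ignores one-off setup costs.

```python
def annual_automation_roi(hours_saved_per_week, hourly_cost, tool_cost_per_month):
    """First-year ROI: value of hours saved versus tool cost (setup time ignored)."""
    annual_savings = hours_saved_per_week * 52 * hourly_cost
    annual_cost = tool_cost_per_month * 12
    return (annual_savings - annual_cost) / annual_cost

# Illustrative figures: 10 hours/week saved at £30/hour, tool at £200/month
roi = annual_automation_roi(10, 30, 200)
print(f"{roi:.0%}")  # prints "550%"
```

Plugging in your own fully loaded hourly cost matters more than the tool price; at these numbers the labour saving dwarfs the subscription.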
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
Put This Into Practice
I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.