Marilena Kanatá
Senior IP & Media Counsel
Fever
Madrid, Spain

This article explores the legal and contractual uncertainties arising from the use of generative artificial intelligence (AI) to create digital replicas in the entertainment industry. As synthetic performances become more prevalent—ranging from de-aged actors to AI-cloned voices—the absence of a clear regulatory framework raises complex questions about ownership, consent, and performer rights. The article examines key international and domestic frameworks, including neighboring rights under WIPO treaties, data protection law, and publicity/image rights across jurisdictions. It also analyzes recent developments in collective bargaining agreements, such as those led by SAG-AFTRA and Equity, and proposes practical guidance for structuring private agreements to manage risk, clarify consent, and define the scope of AI-related uses while the legal landscape continues to evolve.
Generative artificial intelligence (Generative AI) is reshaping the entertainment industry, not only in how content is consumed, but also in how it is created and distributed. AI-generated scripts, music, and video—now increasingly integrated across film and music production—offer cost-effective and appealing opportunities for studios, streaming platforms, and content creators. According to a 2023 global study by Lucidworks, 96 percent of AI decision-makers in the entertainment and media industry planned to increase investment in generative AI technologies.¹
In film, generative AI has enabled the creation of entirely synthetic performances: de-aging techniques allow actors to portray younger versions of themselves, and synthesis can even recreate deceased performers. Recent examples include the film Here (2024), which features a digitally de-aged version of Tom Hanks,² and the Netflix documentary The Andy Warhol Diaries (2022), in which Andy Warhol’s voice was synthetically reproduced.³ More controversially, studios are beginning to use AI to populate background environments: in Prom Pact (2023), for instance, Disney+ reportedly employed AI-generated extras to fill crowd scenes.⁴
Similarly, the music industry has seen a rise in AI-generated content, with AI-cloned voices replicating artists’ vocal styles. Some AI-generated songs closely mimic real musicians, making it difficult even for experienced listeners to distinguish synthetic performances from real ones. A notable example is country singer Randy Travis, who, after losing his ability to sing due to a stroke, released his first new song in over a decade using a voice model trained on archival recordings.⁵
These developments have accelerated the use of so-called digital replicas: video, image, or audio recordings digitally created or manipulated to depict an individual realistically—but falsely. A “digital replica” may be authorized or unauthorized and can be produced by any type of digital technology, not just AI.⁶ However, their prevalence has grown rapidly because AI can generate them at scale and with minimal effort.
Although digital replicas and AI-generated performances present new opportunities, they also generate legal and contractual ambiguity—particularly in the absence of a clear regulatory landscape. While these technologies raise wider concerns in contexts such as misinformation and non-consensual pornography, this article focuses on their use in entertainment. Specifically, it addresses the implications for performers’ rights, the boundaries of consent, the ownership and exploitation of AI-generated performances, and the growing role of private agreements in managing these risks.
Two of the most pressing legal questions are: (1) if a digital replica delivers a new performance—whether in a film, advertisement, or musical recording—who should be considered its rightful owner? and (2) what are the boundaries of consent when replicating a person’s likeness, image or voice?
As for the first question, performers do not hold copyright over their performances, but are instead granted neighboring rights, which protect the fixation and commercial use of their recorded performances. International treaties such as the Rome Convention (1961) and the WIPO Performances and Phonograms Treaty (WPPT, 1996)⁷ grant performers certain exclusive rights over their recorded performances, ensuring they receive compensation for commercial uses. These treaties define performers broadly—as actors, singers, musicians, dancers, and others who perform literary or artistic works. They also grant moral rights, including attribution and the right to object to distortions that may harm the performer’s reputation. More recently, the Beijing Treaty on Audiovisual Performances (2012) reinforced protections for actors in film and audiovisual productions, strengthening their ability to control the use of their recorded performances.⁸
However, these instruments were developed long before the emergence of generative AI and do not account for the specific challenges it presents. Unlike traditional computer-generated imagery (CGI), where a human actor provides the base performance and may contribute creatively to the final result⁹—AI-generated performances can be produced entirely without the performer’s direct involvement. While rights in the final audiovisual or phonogram product generally rest with the producer, performers retain neighboring rights over their own recorded performances, protecting their creative input. The simulation of that input through synthetic media may raise legal issues, including whether such outputs could be subject to new rights or entitlements—and if so, who might hold them.
A further layer of complexity arises when synthetic performances are generated by AI systems trained on pre-existing materials—particularly recordings of the very artists being digitally replicated, used without authorization in jurisdictions where authorization is required. In these cases, performers may have neither consented to nor been remunerated for the use of their prior performances as training data, and may lack both control over and compensation for the resulting AI-generated output. Equity—the UK performers’ union—has warned that this practice often relies on pre-existing performances without proper licensing. Even when training uses lawfully acquired material, the union argues, performers are frequently excluded from licensing deals and receive no share in the commercial value of synthetic outputs based on their identity or style.¹⁰
Beyond questions of ownership, the issue of consent plays an equally central role in determining how synthetic performances may be lawfully used and controlled. When such consent is clear and informed, the legal landscape is relatively straightforward. Challenges arise when such performances are created without the performer’s explicit consent, as happened with the AI-generated song mimicking Drake and The Weeknd,¹¹ or when the scope of that consent is vague or overly broad. In 2023, for example, Tom Hanks publicly denounced the unauthorized use of an AI-generated version of his likeness in a commercial advertisement for a dental plan.¹² This and similar cases highlight the uncertainty surrounding the scope and duration of consent. Does it extend beyond the original context or project? Can a digital replica be reused across unrelated projects or indefinitely? And who controls such uses after the performer’s death?
To frame these questions, it is helpful to consider the broader set of rights potentially affected when an individual’s likeness, voice, or biometric data is used to create AI-generated replicas without consent or under unclear consent conditions.
Among these, privacy rights may come into play when synthetic media depicts individuals in ways that compromise their dignity, autonomy, or personal integrity—a concern recognized in many legal systems, albeit through differing legal frameworks. In the United States, privacy is not protected by a single, comprehensive statute but rather through a patchwork of state-level torts—such as false light or appropriation of name and likeness—which typically apply only to living individuals and often require that the use be commercial in nature.¹³ Some states further limit protection to well-known persons.¹⁴
In contrast, privacy is considered a fundamental right under the European Union’s General Data Protection Regulation (GDPR),¹⁵ which offers protection for personal data, including biometric identifiers like facial scans and voice imprints. The unauthorized processing of such data—including its use in training AI models or generating digital replicas—may constitute a violation of data protection law, regardless of whether the use is commercial or whether the individual is a public figure.
In addition to privacy and data protection, publicity rights may be affected—particularly in the U.S., where they are recognized through state statutes or common law in most jurisdictions. These rights protect individuals from unauthorized commercial exploitation of their name, image, or likeness. However, their scope is fragmented: some states limit protection to public figures, others extend it post-mortem, while many do not recognize the right at all.¹⁶
By contrast, many civil law jurisdictions do not recognize a stand-alone right of publicity but rely instead on image or personality rights, which are often integrated into wider concepts of privacy or moral rights. In Spain, for example, Organic Law 1/1982¹⁷ prohibits the unauthorized commercial use of a person’s voice, image, or likeness—particularly for advertising or promotional purposes. In this context, consent must be specific, express, and limited to authorized use, meaning that approval for one context (e.g., a film) does not automatically extend to others (e.g., advertising or AI training).
Notably, copyright law does not protect an individual’s identity—even when it is incorporated into a copyrighted work. A replica of a person’s voice or image alone therefore does not constitute copyright infringement. Only if the person owns the copyrighted source materials used to train the AI model might they assert a copyright claim. This means that performers have no automatic remedy under copyright law for unauthorized AI-generated simulations of their likeness or voice.¹⁸
Unfair competition law may also apply when digital replicas deceive consumers or exploit a performer’s identity in misleading ways. In comments submitted to the U.S. Copyright Office’s 2023 Notice of Inquiry on Artificial Intelligence and Copyright, the Federal Trade Commission (FTC) emphasized that unauthorized digital replicas can constitute deceptive or unfair practices—particularly when they mislead consumers, harm a performer’s reputation, or cause market confusion.¹⁹ The FTC is also exploring new regulatory measures, including a proposed rule that would prohibit impersonation of individuals, businesses, and government entities through digital content, including AI-generated voice clones and synthetic media.²⁰
This legal fragmentation creates uncertainty not only for performers seeking to safeguard their digital likeness, but also for producers aiming to ensure that their intended uses are properly authorized and legally compliant across jurisdictions. In the absence of harmonized statutory rules, private agreements are becoming an essential legal tool to document informed consent and define the scope of permitted uses, licensing terms, and the allocation of economic rights.
In December 2023, SAG-AFTRA signed a landmark collective bargaining agreement with the Alliance of Motion Picture and Television Producers (AMPTP), establishing, among other protections, safeguards for the use of AI-generated digital replicas in film and television. The contract distinguishes between “employment-based replicas” (created with the performer’s participation) and “independently-created replicas” (created without it). For both, the agreement requires clear and conspicuous written consent, tied to specific uses, and guarantees compensation.²¹
Building on this model, SAG-AFTRA has extended AI protections into other sectors. In January 2024, it reached an agreement with Replica Studios covering AI-generated voice replicas in video games and interactive media. The deal mandates informed consent, sets minimum terms, and gives performers the right to opt out of future uses.²² In April 2024, the union concluded a tentative agreement with major record labels—including Warner Music Group and Universal Music Group—focused on synthetic voices in music. It limits the definition of “artist” to human performers, requires clear and conspicuous consent, and guarantees remuneration for commercial uses.²³
Equity, the UK performers’ union, has emphasized that most existing contracts were not designed to address AI training or digital replication. The union cautions that broad or generic waivers should not be interpreted as valid consent for such uses—particularly when these uses were not foreseeable at the time the agreements were signed. The union also calls for updated contract models to ensure explicit, informed consent and appropriate remuneration in line with performers’ intellectual property and data protection rights.²⁴
These developments underscore the growing importance of clear and well-structured contractual provisions. To provide legal certainty for all parties involved, agreements involving AI-generated performances should address several key areas: the scope and duration of consent, the specific uses permitted (including whether a replica may be reused in unrelated projects), compensation and licensing terms, the allocation of economic rights, and control over post-mortem uses.
As the legal landscape around digital replicas continues to evolve, the enforceability of such provisions will depend on the regulatory framework—especially in jurisdictions where certain rights, such as moral or personality rights, are non-waivable. In this context, until a clearer legislative response emerges, carefully negotiated agreements remain the most effective safeguard to balance innovation with the rights of performers.■