Examining the critical difference between human imitation, AI deepfakes, and the legal concept of "substantial similarity" in music copyright.
The rise of Generative AI has blurred the line between respectful homage and illegal infringement. For decades, the music industry has wrestled with the legal challenge of the **soundalike**—a human vocalist hired to imitate a famous singer’s voice. Today, AI has perfected this imitation, creating a new challenge: the **deepfake**.
Understanding the current legal precedents regarding imitation is crucial, as courts will apply these decades-old standards to determine liability for new AI-generated music.
The distinction between the two forms of imitation lies in the technology and the legal right they violate:
| Imitation Type | Technology Used | Legal Right Violated |
|---|---|---|
| **Soundalike (Human)** | Human vocalist or producer replicating vocal *style* | **Right of Publicity** (Misappropriation of Voice) |
| **Deepfake (AI-Generated)** | Generative AI trained on the actual artist’s vocal recordings | **Right of Publicity** (Digital Replica) and potentially **Copyright** (if training was unauthorized) |
While the goal is the same—to evoke the famous artist—the law treats the direct, computer-generated replica differently from the human mimicry of a style. The **NO FAKES Act** specifically targets the AI deepfake, but the law already had tools for human soundalikes.
In the United States, an artist’s distinctive voice is protected not by federal Copyright Law, but primarily by state-level **Right of Publicity** laws, which prevent the unauthorized commercial use of an individual’s identity. Two landmark cases established this protection:
Bette Midler successfully sued Ford after the company, having been turned down by Midler herself, hired a soundalike to imitate her distinctive voice for a Mercury Sable commercial. The Ninth Circuit ruled that **"a voice is as distinctive and personal as a face."** [1]
Tom Waits won a suit against Frito-Lay for a Doritos commercial that used a soundalike to mimic his unique, gravelly vocal style. Waits was awarded $2.475 million in damages. The court emphasized that the imitation was designed to suggest his endorsement, constituting unfair competition and misappropriation of his identity. [1]
The Takeaway: These cases established that even if a human soundalike does not violate copyright (because no specific copyrighted song was copied), they violate the Right of Publicity if the imitation is deliberate, highly recognizable, and used for commercial gain.
When an AI generates a song that is not a vocal deepfake but merely mimics the structural elements of a human artist (e.g., the chord progressions, instrumentation, and rhythmic patterns that define the "Taylor Swift sound"), the legal challenge falls back on **Copyright Law** and the **Substantial Similarity** test.
To prove copyright infringement of a musical composition, a plaintiff must prove two things: (1) ownership of a valid copyright in the work, and (2) that the defendant copied protected elements of that work, typically shown through access and substantial similarity. [2]
Courts rely on **musicologists** to dissect the works, filtering out "unprotectable elements" (common chord progressions, simple rhythms, or generic lyrical themes). What remains is the "protected expression"; only copying of that expression constitutes actionable infringement. [3]
Crucially, **musical style** is considered an unprotectable *idea*, not a protectable *expression*, under the Copyright Act. [4]
The distinction between the soundalike and the deepfake has profound implications for how AI is regulated:
| AI Output | Available Legal Remedy |
|---|---|
| **Deepfake Voice** (Exact replication) | Lawsuit under the new **NO FAKES Act** (Digital Replica) or existing state **Right of Publicity** laws. |
| **Stylealike Song** (New song in the style of Artist X) | Lawsuit under **Copyright Infringement** by proving the AI copied **substantial, protected elements** of an existing song. |
The greatest legal risk for generative AI is not creating a song *in the style of* an artist (which humans do legally all the time), but **impersonating their unique, non-copyrighted identity** (the voice) for commercial purposes without consent.
The legal framework for imitation is clear: an artist's signature voice is protected as part of their identity, whether it's mimicked by a human or cloned by a machine. However, the legal protection for an artist's *style* is narrow, making it difficult to sue an AI for merely mastering a particular genre or aesthetic. Future litigation will focus on whether AI-generated stylealikes cross the line into copying protectable musical expression, setting new precedents for how much originality copyright demands.