On April 28, 2026, Taylor Swift filed three trademark applications with the U.S. Patent and Trademark Office to protect her likeness from AI-generated deepfakes [1, 4].
This move highlights the escalating struggle between high-profile artists and generative artificial intelligence. As deepfake technology becomes more accessible, public figures face increasing risks of unauthorized voice cloning and image manipulation that can mislead audiences or damage personal brands.
The filings specifically target the misuse of the singer's unique identifiers [1]. According to the applications, the trademarks cover two distinct voice clips [2] and one specific stage image [3]. By registering these assets, Swift aims to create a legal barrier against the growing threat of AI-generated audio and visual content that could misuse her identity [5, 6].
The strategy focuses on securing intellectual property rights over biological and performative traits. While copyright law often protects finished recordings and photographs, trademarks can offer a different layer of protection regarding the commercial use of a brand's identity, including the specific sounds and visuals associated with a performer.
This filing comes as the industry grapples with the rapid proliferation of AI tools capable of mimicking human speech with high precision. The use of the USPTO process indicates a shift toward proactive legal safeguards rather than relying solely on reactive litigation after a deepfake has already circulated.
This action signals a shift in how celebrities approach the "right of publicity" in the age of generative AI. By treating voice clips and stage imagery as trademarks, Swift is attempting to establish a proprietary legal claim over her digital identity, potentially setting a precedent for other artists to treat their biological and performative traits as protectable brand assets.