Variety

Generative AI Fueling ‘Exponential’ Rise in Celebrity NIL Ripoffs: Exclusive Data

R. Johnson · 48 min ago
A scourge of unauthorized AI-generated or manipulated content misappropriating celebrity likenesses and well-known IP has been rising since generative AI became accessible in consumer applications and open-source models.

Fake product endorsements, deepfake pornography and voice-enabled chatbots that mimic a celebrity's voice and persona are currently the most common forms of such misuse, sources told VIP+. All have the potential to harm personal and professional reputations and brand value, cause significant emotional distress and mislead or defraud susceptible consumers. These misuses implicate a celebrity's name, image and likeness (NIL) rights, which typically cover both facial and voice likeness.

Notorious instances of NIL misuse with generative AI have surfaced in news reports, with famous victims joining a lengthening docket including Taylor Swift, Tom Hanks, Steve Harvey and Selena Gomez. Yet such reported infringements still amount to anecdotal evidence, the tip of the iceberg of a much larger problem.

Sources described that problem to VIP+ as growing month over month, noting an "exponential" or "explosive" rise in the number of talent NIL infringements detected online, an upswing driven by a wave of widely accessible, easy-to-use generative AI creative tools coming to market.

"It's a problem for all of our clients, and if we have a client who's not experiencing it yet, then it's going to be a problem for them," one talent agency source said to VIP+. "Anybody who has a fandom or a following, it's going to be a problem."

Generative content — defined as any output from a generative AI tool (e.g., Midjourney) or model (e.g., Stable Diffusion) across all AI modalities — now accounts for the vast majority of synthetic media being distributed online, according to the AI authentication solution provider Vermillio.

That figure doesn't include so-called "deepfakes," a distinct (but often conflated) category requiring more sophisticated techniques, in which generative AI is used to manipulate an original piece of content, such as a video altered to overlay or substitute a famous person's face or voice onto another person's (i.e., faceswap or voiceswap, respectively). The vast majority of talent deepfakes are pornographic, according to Vermillio data shared with VIP+.

By contrast, there is no overlay in generative content, which has vastly outstripped talent deepfakes detected online across large portions of the public internet since such generative AI tools first began to emerge publicly in November 2022.

A meaningful share of this generative content — about 40% in 2023 — already contains either specific talent likeness or IP, according to Vermillio data. That share is expected to increase to about 67% in 2024, per an internal estimate of total content items based on data tracked back through August 2024 and past growth rates.

In the coming years, material containing talent NIL and IP will not only grow significantly, it will account for an increasing share of the total generative content distributed online. Beyond 2026, Vermillio expects that share growth to start to decelerate slightly, as developers begin to use synthetic data to train AI models.

Sources indicated that in the past year voice has become the biggest growth category for detected NIL infringements, such as exploiting a celebrity voice in a fake ad or in interactive chatbots that fans make or use as personal companions.

Infringing content most commonly appears and spreads on social media platforms, chiefly YouTube, TikTok and Instagram, according to Loti, a deepfake detection and takedown service also working with public figures, based on its analysis of 30 billion assets between July 1 and Sept. 10. A smaller but fraught remainder appears on porn sites or various blogs across the web.

Both Loti and Vermillio scan substantial portions of the internet, across web and app environments, to detect cases of unauthorized synthetic media misappropriating talent NIL and subsequently issue takedown requests. Yet their reach doesn't extend to the dark web and non-public parts of the web that aren't crawlable, such as personal email accounts and private messaging (e.g., Facebook Messenger, WhatsApp, Telegram), which can be a hotbed particularly for scams.

Nor is such material always confined to a single post on a single platform, which poses an even greater challenge to detection and rightful takedown. Sources described synthetic content commonly snaking across platforms, such as a link embedded in an otherwise benign social media post that directs the user to another platform or site (e.g., a Patreon, product page, adult content site or interactive chatbot app), where they can transact.

That cross-platform "path to purchase" means, even if a social media company can detect AI content on its own platform, it may not always have enough visibility to know it should take down a piece of content that directs a user to harmful content located off-platform.

Last week, California Gov. Gavin Newsom signed new state legislation to protect actors' visual and voice likenesses against unauthorized use in new works: AB-1836, which prohibits and penalizes the use of a deceased performer's digital replica without the permission of their estate, and AB-2602, which makes a contract unenforceable where a digital replica is used in work the actor could have performed in person, if its use isn't clearly described in the contract and the performer wasn't represented by a lawyer or labor union when entering the deal. However, only the bill pertaining to deceased performers provides any recourse against the rise and spread of user-generated unauthorized AI content online.

Ultimately, data protection for talent likeness and IP can be understood as the flip side of monetization through licensing. Data protection strategies will become increasingly important as some talent begins to go down the path of licensing their likeness for fully authorized and authenticated generative AI uses and applications.

Next up on AI from VIP+:
• Sept. 25: How talent and Hollywood agencies are confronting the rising problem of celebrity deepfakes
