Le Creuset said it is not affiliated with an artificial intelligence-generated Taylor Swift ad that has appeared online, seemingly promoting a cookware set.
The ad, which began circulating on social media earlier this month, used Swift’s likeness by deepfaking her voice and layering it over camera shots of Le Creuset cookware spliced with clips of her speaking.
“Hey y’all, it’s Taylor Swift here,” the voice says in a robotic tone. “Due to a packaging error, we can’t sell 3,000 Le Creuset cookware sets, so I’m giving them away to my loyal fans for free.”
It appears to have originally been posted by a Facebook page titled “The most profitable shares,” and as of Wednesday morning it had amassed about 2,300 views. A spokesperson for Meta told NBC News that the ad has been removed.
A spokesperson for Swift did not respond to a request for comment.
In a statement, the cookware company said it has nothing to do with the fake ad.
“Le Creuset is not involved with Taylor Swift for any consumer giveaway,” the company said. “All approved Le Creuset giveaways or promotions come from the official Le Creuset social accounts. Consumers should always check Le Creuset’s official social accounts and website before clicking on any suspicious ads.”
The ad is the latest in a string of AI-generated posts attempting to fake a celebrity’s or influencer’s endorsement of a product.
In November, Scarlett Johansson’s legal team demanded that the app Lisa AI, which generates stylized avatars based on real photos of people, stop using an AI-generated version of her in an online ad.
YouTube star MrBeast has also called out a viral deepfake video using his face and voice to advertise a purported iPhone 15 giveaway — just days after Tom Hanks posted his own statement on Instagram to clarify that he had “nothing to do with” a video using his AI-generated likeness to promote a dental plan.
Although deepfake technology has been around for years, rapid advancements in generative AI over the past year — along with its growing mainstream use — have sparked renewed concern over the ease of replicating people’s likenesses without consent.
In August, NBC News viewed more than 50 videos posted to platforms like Facebook, TikTok and YouTube that used digitally manipulated images and audio of public figures. All of those videos seemingly aimed to scam viewers out of their money.
The increasing accessibility of this technology has even led experts to speculate on the possibility of a “deepfake election” cycle this year, with political disinformation through AI-manipulated videos likely to run rampant. Others also worry about a growing market for deepfake porn, which thrives off producing fake videos featuring the likenesses of real people without their consent.