The Feed Is Fake
That “viral” song, movie, meme, influencer, and celebrity drama was probably the product of a stealth marketing campaign.
By a features writer for New York Magazine, previously the magazine's culture editor and one of the original editors of Vulture.
Illustration: Rob Vargas
Joe Lim estimates that 90 percent of what you see on the internet is advertising in disguise, and he should know. For three years, Lim ran a company called Floodify, which at its peak operated 65,000 dummy social-media accounts used to drum up attention on behalf of paying clients. On a typical day, he says, Floodify posted 50,000 videos across TikTok, Instagram, YouTube, and X, all of them designed to pass for the unscripted output of ordinary users.
“We promoted music for all the major record labels,” says Lim, 29, who lives in San Francisco. “We worked with a top-five celebrity I can’t name. We got 40 million views for an artist with just a hundred thousand followers.” Floodify’s services were in demand in politics, too. “When Eric Adams was running for reelection, his team asked me to do a campaign with videos of AI-generated influencers shitting on Mamdani: ‘This grocery-store idea is bullshit.’” Lim says he turned down the Adams job not out of principle but because a consultant working with the campaign stopped replying to his emails. (Eric Adams’s former chief of staff Frank Carone tells me, “I have no knowledge about this, but I would have encouraged it.”)
The point of this kind of marketing is that nobody is supposed to notice it. But lately, the machinery has started to show. In March, Jesse Coren and Andrew Spelman, co-founders of the digital music-promotion agency Chaotic Good Projects, gave a live interview to a Billboard reporter at South by Southwest in which they breezily described using sock-puppet accounts to manufacture enthusiasm for artists at every level of the music industry, from major-label pop stars to niche indie acts. Spelman called the practice “trend simulation.” His motto: “Everything on the internet is fake.”
Chaotic Good’s interview went viral the old-fashioned way: by making lots of real people mad. Some were appalled by the cynicism of the company’s pitch, others by its client list, which included indie artists whose popularity fans preferred to imagine had spread organically. Most of the outrage focused on the Brooklyn band Geese and its frontman Cameron Winter, whose strangled, water-buffalo caterwaulings became inescapable in 2025. To skeptics, Chaotic Good seemed to provide the missing explanation for the group’s unexpected ubiquity. Wired called Geese’s success “a psyop,” which triggered Paste to defend the band in a piece headlined, “Congratulations, You Discovered Digital Marketing.” Then, with timing that did not discourage further conspiratorial thinking, TMZ published photos of Winter leaving a restaurant with Olivia Rodrigo, and the subject mostly changed.
But the fight over Geese missed the larger point. The issue wasn’t really whether one rock band had been fraudulently foisted on unsuspecting listeners. It was that the same techniques Coren and Spelman bragged about onstage are now being used to fool people on every app they visit to find out what other people think, not just in music but across entertainment, politics, consumer products, and celebrity gossip. Shady marketing and propaganda aren’t new, of course, but what is new is that the entire infrastructure of public conversation has been quietly captured by both. On social media, popular opinion is being formed, measured, and manipulated all at once, and every signal the platforms produce — a trending song, a backlash, a talking point, the feeling that “everybody” is suddenly talking about the same thing — can now be fabricated by unseen actors with hidden agendas. We’ve locked ourselves in the stupidest possible version of Plato’s cave, where what looks like the spontaneous consensus of the hive mind is often just shadows on the wall, put there by marketers, political operatives, foreign-influence campaigns, or anyone else with a few hundred bucks and something to sell. “Everybody is doing this now,” Lim says. “And if you’re not, you’re behind.”
Announcements for clipping campaigns posted to Discord servers.
The primary tactic used by companies like Chaotic Good and Floodify (and many, many others) is known as clipping. A record label — or a movie studio, celebrity talent agency, political campaign, or just some bozo with a video podcast — hires a company to turn a song, trailer, interview, stump speech, or whatever into short, social-media-friendly fragments, either by cutting the clips in-house or by farming them out to a network of freelance clippers. Those clips are then posted by normal-looking accounts: a meme page might serve up a quote about relationships with a new pop song playing behind it; a fan page for a horror movie might cut the scariest 20 seconds from the trailer into a loop and post it twice a day; another account might chop the most entertaining exchange from a three-hour podcast and rebroadcast it to people who would never sit through the entire episode. If enough of these clips rack up enough views fast enough, credulous social-media algorithms interpret the spike as an authentic surge of interest and push the videos to real users, who sometimes generate real engagement, prompting the algorithm to push those videos even further.
Clipping’s origins go back at least to 2022, when the influencer Andrew Tate deployed members of his fan club to post clips of his podcast on social media, causing so many people to wonder who he was and why he was clogging up their feeds that he briefly became one of the most Googled people on earth. Since then, and especially over the past year, clipping has gone professional. Dozens of agencies now offer the service to paying customers. Many operate out of public view, inside members-only communities — which I found were not so hard to join — on platforms like Discord and Whop, where they recruit regular people to do the posting. Each community functions as its own marketplace. An agency announces a new campaign, specifying where the clips should run (usually TikTok, Instagram, and YouTube) and what they pay (usually $1 or $2 per thousand views). Members then have a few days to make and upload as many clips as they can, hoping at least one will go viral. A clipper who posts a single video might earn nothing if it flops or thousands if it hits. The founder of one agency tells me his top clipper has earned over six figures publishing across thousands of accounts.
In a couple of weeks of lurking in these clipping communities, I saw campaigns scroll past for Bad Bunny, Zayn Malik, Fleetwood Mac, Shania Twain, Luke Combs, Noah Kahan, Teyana Taylor, Teddy Swims, Dominic Fike, Kane Brown, Netflix’s The Night Agent, Apple TV+’s For All Mankind, the horror movie Insidious 6: Out of the Further, the Michael Jackson biopic Michael, the betting platform Kalshi, and the Met Gala, among others. This doesn’t necessarily mean the campaigns were paid for by anyone directly associated with those people, movies, TV shows, apps, or events. In some cases, the clipping agencies might have launched them on their own to lure prospective clients or astroturf themselves. But it’s hard to know for sure since none of the representatives for the people or things listed above responded to my calls or emails asking for clarification.
And then there was Justin Bieber. In April, Bieber — who is among the most-streamed artists in pop history and has 287 million Instagram followers — headlined two consecutive weekends at Coachella, playing before massive festival crowds and millions more watching on YouTube. Coachella is the biggest stage in pop music save only for the Super Bowl, the kind of event that in theory generates its own attention. And yet on both weekends, a Discord server I’d been monitoring hosted paid campaigns for Bieber’s Coachella performances, offering clippers as much as a dollar per thousand views. The announcement for one campaign read, in all caps, “THIS IS SO VIRAL GO GO GO GO.” (Bieber was also listed as a client on Chaotic Good Projects’ website before his name, along with the rest of the company’s roster, was deleted.)
Why would someone as famous as Justin Bieber need clipping? The people I asked seemed touched by my naïveté. “Anyone who wants to go megaviral now, they need to pour fuel on the fire,” says Lim, who admits he has no specific knowledge of the campaign but knows there is so much spam and pretend hype on the internet that nothing cuts through without artificial help anymore, not even huge artists with real audiences. Keith Presley, the co-founder of Gudea, a behavioral-intelligence firm that uses AI to track the sources of viral phenomena, put it more broadly: “I don’t know if we’ve found a true viral trend in a while. All of them are going to have some sort of inauthentic behavior behind them.”
Whoever paid for the Bieber clipping campaigns — his reps did not respond to multiple calls and emails — seems to have gotten their money’s worth. In the days after the first Coachella set, a video of Bieber performing “Daisies” became the most-watched clip from this year’s festival on Coachella’s official YouTube channel, racking up more than 21 million views, twice as many as any other 2026 video. Bieber’s catalogue drew 664 million streams globally in the week ending April 16, a 171 percent increase over the previous week. “Beauty and a Beat,” his 2012 collaboration with Nicki Minaj, debuted on the Billboard “Global 200” at No. 4 and ascended to No. 1 two weeks later, only the second non-holiday song to top the chart more than a decade after its release.
How much of that lift came from the Coachella sets themselves, and how much came from the thousands of paid clips amplifying those sets, is hard to say. But the blurriness is the whole point. The artist gets a bump, the bump can’t be definitively attributed to the campaign that paid for it, and nobody can say for sure what’s organic and what isn’t. Until recently, an artist looking to juice his streaming numbers might have paid third-party services to send bots to Spotify. An executive at a Hollywood talent agency tells me that fake streams are a “loss leader” for the music industry, a fixed cost that produces a fixed number of fake-looking plays, “and that’s never going to incite a wildfire moment.” Clipping is different. It doesn’t fake the stream itself; it fakes the appearance of excitement that causes real people to stream. “You might incept an actual trend — you have a chance for a 100-times return on your ad spend,” the executive says.
That’s not to say that clipping is a magic bullet or that any artist can become Bieber or Geese if they spend enough money. If real users don’t watch or share the clips, a campaign fizzles. So in that sense, a lot of what clipping does is help good artists find the audiences who would’ve liked them anyway by accelerating the early excitement just enough to push them past the algorithmic threshold that decides who gets discovered and who doesn’t. But the problem is that everybody has figured this out now, so the threshold keeps moving. Dan Brahmy, the CEO of the bot-detection firm Cyabra, compares this to a professional soccer league in which every club has secretly bribed a referee. “If you have one team that doesn’t have enough money, or just isn’t aware that you can bribe a ref to always win the quarterfinals,” he says, “that team is out of the system.”
Manipulating algorithms is only part of the goal. The other is fooling humans, particularly the dwindling number of journalists, critics, and other gatekeepers who are still capable of conferring legitimacy by paying attention. Livestreamers were among the first to discover that clipping could make them seem more significant than their real statistics would suggest. Two of the most successful are the Groyper-provocateur Nick Fuentes, who’s been banned by most major platforms but remains artificially overrepresented on TikTok thanks to his clips, and Clavicular, the looksmaxxer who was recently charged with a misdemeanor for shooting an alligator on one of his streams and who credits his golden-ratio handsomeness to smashing himself in the face with a hammer. The New York Times recently profiled both of them as figures of great importance — which they are now in the sense that profiles in the New York Times can occasionally make people seem important — even though the live shows that are ostensibly their flagship product usually draw concurrent audiences in the low-to-mid-five figures, less than a fading cable-news show does during a slow hour. Reporters and editors who get their ideas from their social-media feeds — which is most of them, most of the time — can mistake a paid simulation of public interest for the real thing and then make it real by covering it.
The screenwriter William Goldman once famously wrote that “nobody knows anything,” by which he meant that no one in Hollywood has any idea in advance which movies will turn into hits. That line has become true in ways Goldman could never have imagined when he wrote it in 1983, a moment when, in retrospect, we actually knew quite a bit. Back then we at least knew, after the fact, which cultural products had found an audience, because we still had trusted metrics for success that were tracked, audited, and reported by people whose jobs depended on getting them right.
But none of that applies anymore. Most culture is streamed inside apps that lock their consumption data in a black box, report whichever proprietary metric flatters them most, and refuse to be audited or to convert their stats into anything that can be compared with those of any competitor. Even the artists whose work all this machinery is supposedly serving no longer have a reliable way to know what real audiences actually want, since whatever feedback reaches them may already have passed through the same apparatus built to distort that feedback in the first place. In that vacuum, fakery thrives. When nobody knows what’s actually popular, the appearance of popularity matters more than popularity itself.
The reason all of this is happening, probably more than any other, is that clipping is cheap. And not just cheap — cheaper than almost any form of advertising that has ever existed. A typical clipping campaign costs clients roughly a dollar per thousand views, what marketers call a $1 CPM. By comparison, a billboard might cost $10 per thousand estimated passersby; a TV spot can cost $30 or more per thousand viewers; a magazine ad can run even higher. An officially purchased TikTok ad, the kind labeled “Sponsored,” can cost ten times what a clipping campaign does, with the added disadvantage that its viewers will know they’re watching an ad. The math of clipping is so favorable to clients that in many cases campaigns end up giving away views for free. Clipping agencies typically don’t charge extra if a campaign’s view count exceeds whatever the client originally paid for; once the budget is met, the meter stops, but the clips that have been posted keep circulating. Khrish Kewalramani, the co-founder of the clipping agency Spade Clipping, told me one of his recent campaigns cost the client less than $10,000 and resulted in nearly 100 million views. “Why is anyone spending money on a billboard,” he asked me, “when I can get your brand in front of people for a fraction of the cost?”
To be fair, social-media platforms brought this on themselves. The great pivot to video of the past decade was sold to the world as a simple accommodation to user behavior: People didn’t want to read anymore; they wanted to watch. But that was only partially true. Platforms wanted video even more because they could charge more for video ads than they could for the banner ads that used to fund beautiful websites like the one you’re reading right now. So those platforms repaved most of the internet into surfaces that could host video ads, then incentivized users and publishers to roll their cameras. The pivot worked. Meta’s revenue has grown more than tenfold since the mid-2010s, and TikTok’s global revenue is expected to top $30 billion this year. But the same shift that made these platforms rich also created a monster that they couldn’t control. Our feeds now require an almost-infinite supply of short-form video, and clipping helps provide it, but it presents a moderation problem with no good solution. Clipping is hard to trace, hard to tell apart from ordinary posting, and hard to eliminate without killing off much of the engagement that these platforms have come to depend on.
I asked TikTok, Instagram, and YouTube whether they were aware of the scale of clipping activity on their platforms and what, if anything, they planned to do about it. A TikTok spokesperson told me, “When we become aware of this type of violative content on TikTok, we take it down,” and confirmed that the company had taken down a batch of clips I’d sent over as evidence of a clipping campaign involving the country singer Kane Brown. (I’d long suspected he’d been cheating to outrank me in search results.) Instagram didn’t respond directly to me but did recently announce what looked like an oblique answer: The company would expand an existing rule so that “if you mostly share content from others that you didn’t make or meaningfully edit, your account won’t be recommended to people who don’t follow you.” (In many clipping campaigns, videos are, technically, “meaningfully edited,” so whether this rule will catch any of them is unclear.) A YouTube spokesperson, meanwhile, replied to me with a statement: “YouTube has long-standing rules to protect the integrity of our platform, and we continuously evaluate our policies to ensure they are in the right place.” The next day, a new clipping campaign appeared on one of the Discord servers I’d been watching — for Google I/O, the annual developer conference run by YouTube’s parent company.
Much of this is, by the way, at least theoretically illegal. In late 2024, the Federal Trade Commission adopted a rule that bans undisclosed endorsements, paid social-media posts that mimic those of normal users, and the operation of networks of accounts to artificially inflate the popularity of a product or person. Penalties run more than $50,000 per violation, which, if applied to just the campaigns I saw myself on Discord and Whop, would amount to enough money to buy all the social-media platforms and ruin them all over again in a whole new way. None of the clipping-agency operators I spoke to seemed concerned about this. None had ever heard of anyone in their industry being investigated by the FTC, much less fined. When I asked a spokesperson for the FTC whether the agency had any plans to take action against clippers, he replied, “Hi there, we’re not going to comment. Thanks.”
The thing that most bothered people about Chaotic Good Projects wasn’t clipping but a related service the company calls “narrative campaigns.” Clipping just puts an artist in front of more eyeballs; narrative campaigns tell those eyeballs what they’re seeing. Chaotic Good co-founder Jesse Coren explained the idea to Billboard at South by Southwest. “A lot of what we do on the narrative side is controlling the discourse,” he said. “Most people see a video or see something about an album that came out and it’s like the first thing that they see, or that first comment that they see, is their opinion even when they haven’t heard the whole album.” In other words, in a world drowning in information, nobody has the time to form an opinion from scratch anymore, so they check captions, comments, and quote tweets to see what people who seem like them have to say. And if everybody is outsourcing their first impressions to the crowd, why not just manufacture the crowd? Co-founder Andrew Spelman gave the example of a musical performance on Saturday Night Live: “The second SNL drops at midnight, you should post a hundred times saying that was the best performance of the year.”
Chaotic Good agreed to a phone interview with me and then canceled five minutes before our scheduled call. The company offered to take questions by email instead and a week later sent back answers attributed to all four of the company’s co-founders, many of which walked back things they’d already said. The SNL example had been “just a hypothetical example of social-media strategy around a key moment.” Narrative campaigns, they now claimed, “mostly consist of consulting on digital PR strategy.” Asked why every artist’s name had been removed from the company’s website, the co-founders wrote that it was “so our artist partners don’t get wrapped up in false accusations or misconceptions about how their music was discovered.”
Even some other clipping agencies find narrative campaigns distasteful. “I think there’s a massive fundamental difference between getting a bunch of volume posts up and astroturfing the comments to influence perception, like, ‘This is the best performance I’ve ever seen’ — that is bullshit,” says Ben Klein, the co-founder of Hundred Days, a Brooklyn-based digital marketing agency that provides some of the same services as Chaotic Good. “People aren’t dumb anymore, and they know what the truth is. They have eyes, they have ears, they have a gut, and they can just feel if something is manufactured.”
But Klein might be giving people too much credit. According to Gudea’s Presley, narrative campaigns are far more common and effective than the public knows. Gudea’s main business is using AI to detect coordinated activity on social media, and Presley says he and his team have observed these tactics being used across a wide range of subjects. “We’ve seen this used for stock manipulation, to promote skin-care brands, to shape conversations around AI, you name it,” he says. Many of Gudea’s clients are large companies looking to defend themselves against what he calls “corporate espionage” — paid narrative campaigns run by smaller competitors designed to damage a larger brand’s reputation just enough to make its customers defect. The thinking, says Presley, is that “if you have a bad opinion about Chips Ahoy! you still want your chocolate-chip cookie. And then you’ll just buy a different brand.” (Neither Chips Ahoy! nor its parent company Nabisco is a Gudea client.)
The same scheme works on people. The dominant technique now isn’t inventing a controversy from nothing but choosing which real, minor outrage to fuel: Because someone on the internet is already mad about almost anything, the job is mostly deciding which objection to amplify and how loudly. In one case, Gudea tracked a campaign promoting a rumor that the cover of Taylor Swift’s 2025 album The Life of a Showgirl contained Nazi symbolism, a rumor that started in the fringes of X and Telegram before being amplified by what Gudea calls “non-typical accounts,” until regular users picked it up and ran with it. Who would do such a thing? “Well, who’s trying to take down Taylor Swift so they can be the next Taylor Swift?” said Presley. (He did not offer a more specific guess.)
This type of digital subterfuge became known to the world last year during the legal fight between Blake Lively and Justin Baldoni, when court documents showed communications between Baldoni’s team and the crisis-PR firm the Agency Group, or TAG PR, describing a multilayered strategy designed to convince the public that Lively was difficult to work with and Baldoni was the aggrieved party. The proposal, according to those court documents, involved using subcontractors to manipulate SEO to “change subject-matter opinion on the first page of Google,” coordinate with social platforms and forum moderators to deemphasize or remove damaging posts, and seed Reddit with “threads with theories” favorable to Baldoni. “We can bury anyone,” TAG PR CEO Melissa Nathan wrote in one text message, adding that the work would be “untraceable.” The price tag for a four-month blitz against Lively was $175,000. Baldoni’s attorney denied that any smear campaign was ever run.
Most of the best-known work in this space relates to crises. Over the past decade, Nathan has worked, through various scandals, with clients including Brad Pitt, Drake, Travis Scott, Rebel Wilson, Logan Paul, and Johnny Depp. But it’s easy to imagine the same infrastructure being used to shape perceptions on matters with lower stakes. In text messages quoted in the court documents, Nathan tells Baldoni’s team about another contractor who, for $25,000 a month, offers “social fan engagement to go back and forth with any negative accounts, helping to change the narrative and stay on track.” Once you know that such a product exists, it’s hard not to think about it every time you see a sudden flood of enthusiastic posts about a famous person’s new project or haircut or outfit or relationship or face.
Some narrative campaigns don’t just push one side of an argument; they push both. That, Presley says, is what Gudea saw in the fuss leading up to Bad Bunny’s performance at the Super Bowl halftime show in February, which broke along predictable political lines: MAGA-aligned commentators complained about the NFL’s decision to hire a Spanish-speaking artist, and progressives pushed back. Gudea analyzed 3.7 million related social-media posts and found that fewer than 4 percent of the accounts in the conversation generated more than a quarter of the content. Also, the two opposing narratives were mirror images of each other in volume and posting cadence, suggesting that the same culprit may have been amplifying both sides of the fight. Gudea speculated that nation-state actors might have been responsible, but it’s not the only possibility or even the most intuitive one. By the time the controversy burned itself out, the NFL had gotten exactly what it wanted from a halftime show — a week of saturation coverage with a culturally divided country griping about its programming choices — while Bad Bunny got the kind of attention that even a global superstar can’t always buy directly. His representatives did not respond to requests for comment.
A similar pattern showed up in the stink over Sydney Sweeney’s American Eagle commercial last summer, in which the slogan — “Sydney Sweeney Has Great Jeans” — provoked accusations that the ad was promoting racial superiority, prompting others to mock the backlash, with Donald Trump eventually swooping in to defend Sweeney on Truth Social. The bot-detection firm Cyabra analyzed seven days of activity around the ad and determined that 15 percent of the TikTok accounts commenting on it were fake but had created a disproportionately large percentage of the uproar. “The public reaction wasn’t all fake,” Cyabra’s CEO, Dan Brahmy, says. “But it was amplified by inauthentic activity.” American Eagle, for its part, made little effort to defuse the situation, releasing a somewhat pointless statement (“great jeans look good on everyone”) days later. “They chose on purpose to essentially say, ‘It’s okay to have backlash,’” Brahmy says. “There was no such thing as bad publicity in that case.” During the controversy, American Eagle’s stock rose 10 percent, adding roughly $400 million in market value.
At a certain point, the distinction between celebrity nonsense and geopolitical information warfare breaks down. The same feeds that can turn a jeans commercial into a referendum on race can also carry foreign-influence campaigns disguised as normal posts. In September 2024, the Justice Department exposed a sprawling Russian operation known as Doppelgänger, which had been registering fake versions of real news sites with URLs like washingtonpost.pm, publishing plausible-looking articles — pro-Russian framings of the war in Ukraine, immigration scare stories, LGBTQ-targeted culture-war pieces, anti–Kamala Harris messaging — and then amplifying them through bogus social-media accounts posing as ordinary Americans. The point wasn’t just to spread propaganda but to make it look like something real that people had found, believed, and shared. The effect of all this is that every public argument big enough to be noticed now comes with a question attached: Is this legit, or did somebody just pay to make it look that way?
What all of this amounts to isn’t just one problem but a stack of them, each feeding the next. Most people now encounter the world through algorithmic feeds built to warp reality, on platforms with every commercial incentive to keep users scrolling and very little incentive to distinguish genuine interest from astroturfed imitations. Into those feeds flows an unprecedented amount of undisclosed advertising engineered to resemble the improvised enthusiasm of human strangers. The platforms reward it with reach; traditional media picks it up and validates it. Meanwhile, as trust in journalism collapses and most of the actual reporting disappears behind paywalls, readers head straight for the comment sections, which seem more like the voice of the people than anything written by a reporter — except many of those commenters may not be people at all.
The good news is that this will all be over soon, according to Lim, because something worse is coming to replace it. He recently shut down Floodify after trying to scale too fast and falling behind on deliverables. At one point, the company accidentally posted the same video to 7,000 accounts, which got them all banned. But he wasn’t discouraged. When we last spoke, he was building a new company and thinking even further ahead. “All of this nonsense is only going to last three to five more years, because in the future, people will stop trusting what they see on social media.” By then, the job will have moved one layer up. “You’ll have to start distributing your content toward AI agents and then they’ll teach humans what they want.”