Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com)
"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.
Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.
But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.
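The enforcement rule described above can be sketched as a simple decision function. This is an illustrative reconstruction from the report's description, not Meta's actual code: the 95% ban threshold comes from the documents, but the 0.5 "likely scammer" floor and the penalty multiplier are hypothetical placeholders.

```python
def enforcement_action(fraud_probability: float, base_rate: float,
                       penalty_multiplier: float = 1.5):
    """Sketch of the reported rule: ban only at >= 95% predicted fraud
    confidence; below that, a suspected scammer just pays a higher ad
    rate. The 0.5 floor and 1.5x multiplier are assumptions."""
    if fraud_probability >= 0.95:
        return ("ban", 0.0)                  # advertiser blocked outright
    if fraud_probability >= 0.5:             # "likely scammer" band (assumed)
        return ("penalty_rate", base_rate * penalty_multiplier)
    return ("normal_rate", base_rate)
```

The striking consequence, as the report notes, is that below the ban threshold a suspected scammer is not removed but simply becomes a more lucrative customer.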
Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."
A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.
A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.
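The strike system described above amounts to a two-tier cutoff. A minimal sketch, assuming a hard 500-strike limit for "High Value Accounts" (the documents only say such accounts exceeded 500 strikes without being shut down, so any real upper cutoff is unknown):

```python
def should_block(strike_count: int, is_high_value: bool) -> bool:
    """Reported two-tier rule: small advertisers blocked after at least
    8 financial-fraud flags; 'High Value Accounts' tolerated far longer.
    The 500 cutoff for high-value accounts is an assumption here."""
    threshold = 500 if is_high_value else 8
    return strike_count >= threshold
```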
Thanks to long-time Slashdot reader schwit1 for sharing the article.
Interests (Score:5, Funny)
The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests.
I see you're interested in scams, let me send more to you.
Re: (Score:2)
This isn't an algorithm problem (Score:3)
Re: (Score:2)
Precisely. Algorithms don't have a moral compass.
Same as Meta[stasize] employees.
"Bombshell Report" (Score:5, Insightful)
What does "Bombshell" even mean? That we're supposed to be outraged at standard business practice amongst scammers? Meta is a scam operation, the "outrage" can start and end right there, and it begs the question, since we all know what they are, why do we even give them our business?
Re: "Bombshell Report" (Score:4, Insightful)
I think most people assume that a legally authorized business is operating above board and not operating scams. I don't know why people assume the best of capitalism when it keeps delivering the worst.
Comment removed (Score:4, Interesting)
Re: "Bombshell Report" (Score:1)
Sure, for those committing the act - any press is good press.
Re: (Score:2)
Re: (Score:1)
Only if you only look at information packaged as entertainment.
Re: (Score:2)
Re: Still beats Communism (Score:3)
Re: "Bombshell Report" (Score:4, Insightful)
Re: "Bombshell Report" (Score:5, Funny)
Guiding Mother Nature's hand (Score:2)
A few generations of purges for the bourgeoisie should work to apply useful selection pressure.
Re: Guiding Mother Nature's hand (Score:2)
RTFA (Score:3)
Those leaked communications also say that they made sure to let the scams go so that they could keep getting money from them. All told, 16 billion dollars.
At that point you're just plain complicit in the scam.
Re:RTFA (Score:4, Interesting)
There's a simple way to clean this shit up that would end this kind of thing overnight.
Put Meta financially on the hook for these scams. If facebook convinces some poor old lady with low tech literacy to hand over her retirement fund, then facebook should be on the hook to repay her. Facebook then can, at their own leisure, chase the scammer down to pay them back, but facebook must be held accountable.
Do that, and those scam ads end overnight.
Re: (Score:3)
> why do we even give them our business?
Network. Arguably they're not doing much that you can't do with a newsletter. Want to "post" something? Just send an email to your friends to let them know. Want to see what your friends are up to? Read your mail.
Problem is, that isn't as much fun, is it? There's no infinite scroll with mail.
However, your life will probably feel better if you stop using Facebook; their algorithm prioritises things that enrage you. There's a post on /. about that somewhere.
Re: "Bombshell Report" (Score:2)
Doesn't really scale, especially if you are sharing videos. Most ISPs have some real limits. Maybe at least use NNTP; you could get large files that way. Intranet NNTP was cool back in the day. But you can't control granular access.
Re: (Score:3)
It scales, if the thing you're sharing is a link rather than an attachment. You can use personal hosting, or Dropbox, or Google Drive, or whatever if you want to host the file yourself.
You can't really control granular access via typical social media, anyone in your group can download and repost. You don't own the data once you put it on the internet so I don't see that as an argument for scalability.
I'll stick by my original claim, just use a mail group and don't give in to the advertisers!
Re: (Score:2)
Online storage is usually not free. The free hosts tend to delete your content over time. I have many terabytes worth of personal photos and videos. Basically none of it is online, for reasons of both cost and privacy & security.
On FB at least, you can share media with a set of specific individuals - or at least you used to, last time I checked. They don't have to belong to an FB group. You can create your own list - ie. family members, local friends, etc.
My family now uses a Whatsapp group mostly (yes,
Re: (Score:2)
Nothing stops a member from screenshotting, or ctrl-a/ctrl-c; that's what I mean. Sure, they can't just join and be included, but the data is out there.
Self-hosting is the type of hosting I'm referring to; using Google Drive, Dropbox, etc. is much like WhatsApp: there's nothing stopping a WhatsApp end-of-life the same way Yahoo Groups evaporated when Yahoo didn't make as much money from it as they wanted.
Stick to email, it'll outlive WhatsApp.
Re: (Score:2)
Yes of course, any content can be shared by anyone who has access to it. I was talking about the ability to participate in conversations and exchange with only a selected set of members. This is somewhat easier to do with FB than with Whatsapp. You can for example share a post with a specific visibility, which will (may) only show on some people's feeds. New entrants in a Whatsapp group chat cannot see the previous content, for example, even though that is sometimes desirable.
Re: (Score:2)
Ok but that sounds like cc or a newsletter with addresses that you want :)
Re: (Score:2)
Yes, but a lot easier to manage than a very long cc list, I think. The e-mail addresses don't usually have a directory where you can see another recipient's face, profile, etc. Anyway, e-mail security is pretty fucked. I once tried to get my mom to use S/MIME with Mozilla suite. It all went great - until the day she had to renew her certificate, sigh.
Re: (Score:2)
Yeah.. manufactured scandal..
Probably a slow press day
Re: (Score:2)
It doesn't beg the question. It raises the question. Begging the question is something else.
Facebook is a massive, ongoing criminal conspiracy (Score:5, Insightful)
Re: (Score:2)
Facebook is a massive, ongoing criminal conspiracy.
Conspiracy implies a degree of secrecy.
Ignoring reports (Score:4, Insightful)
Glad to read it wasn't just my ad reports that got a "we saw nothing wrong, we took no action" review. Pigs.
They even had the nerve to ask me for a feedback opinion about my having control over what I see. I responded quite sincerely, of course. Swine.
Bombshell My A** (Score:3)
Color me not-surprised.
The longer this goes on the happier I am that I never opened a Farcebook (spelling deliberate) account. We use WhatsApp to communicate with family in Peru, but we had those accounts long before the company was bought out.
There have been no penalties ... yet (Score:5, Insightful)
This summary doesn't explain that although there are ongoing government efforts to penalize deceptive third-party ad delivery, there have been no penalties levied against any content/ad delivery platform. The CDA shield provision is a big factor in shifting the blame for scam ads entirely away from the delivery platform and toward the ad creators. So, the supposed $1 billion fine is just a fanciful idea so far.
The current government efforts center around violations of the FTC Act surrounding unfair or deceptive acts or practices in commerce. But this is a murky area that would have to create new case history in terms of prosecution. Perhaps the greatest obstacle to all this is the reality that many of the ads across all mediums and delivery companies are scams to some extent. Some are scams that try to steal money directly, but many are scams for products or services that don't work. It's arguably obvious to many that a lot of the TV ads, especially for less popular shows, are for products and services that don't work or that are extremely overpriced. These are scams to some extent, and the TV stations should know this, but they need that revenue to survive. I'm not arguing in support of these ads and stations, but just saying that this is very widespread and has many nuances.
Re: There have been no penalties ... yet (Score:1)
It's almost as if everything in life is a scam. If you can persuade others to give you money quicker than others take it from you as a result of negative feedback for your actions, it's a win!
Like this is a meta (Score:1)
Facebook is a wretched hive of scams and villainy (Score:5, Informative)
Scamming is a hard problem to solve entirely, but the scale of it on Facebook makes it completely unsurprising to hear that Facebook did the math and decided to optimize for free cash flow. Facebook's real customers are the scammers, to whom Facebook sells its end-users.
There's been a never-ending stream of advertisements for random $500-$5000 products, ostensibly priced at just $39.99. For example, there have been dozens of "companies" (probably all the same scammer) pretending to sell kinetic sculptures by Anthony Howe. They started a few years ago and I last saw one about a month ago... https://www.facebook.com/antho... [facebook.com]
Last summer there was a batch of advertisements for aftermarket car-stuff brands (Summit Racing, BluePrint Engines, a couple others) whose multi-thousand-dollar products were being "cleared out" for half price. The scammers took artwork from the real sites and created clones that looked completely believable, other than the prices. I assume this happens in other niches too, that's just the niche that Facebook served up to me.
Then there are the scammers that create buy-and-sell groups on Facebook... They attract some legit usage as camouflage, then post scams, and delete/ban the users who point out the scamming. I'm not in any of those groups, but the pattern is so pervasive that people are always complaining about it in the groups that I am in.
How's "fund AI" related beside being clickbait? (Score:2)
Of course any revenue, including the shadiest, would be used to fund the expenses; but why not say that it paid dividends or bonuses for the execs, or funded whatever lobbying they're doing? (I assume there's a fund for that; I mean a publicly known one.)
Americans love scams! (Score:5, Informative)
So much so we elected a guy who partakes in them against his own populace.
Typed on my Trump phone (which hasn't shipped), time confirmed on my Trump watch (which is a Chinese watch marked up like 1500%) all while I bought Trump and Melania meme coins via World Liberty Financial which I decided to buy after increasing my brain capacity with the pills Joe Rogan and Ben Shapiro said were great.
Re: (Score:3, Informative)
So much so we elected a guy who partakes in them against his own populace.
Typed on my Trump phone (which hasn't shipped), time confirmed on my Trump watch (which is a Chinese watch marked up like 1500%) all while I bought Trump and Melania meme coins via World Liberty Financial which I decided to buy after increasing my brain capacity with the pills Joe Rogan and Ben Shapiro said were great.
After attending Trump University ... :-)
And... pretty soon you'll be able to get your pills through TrumpRx [trumprx.gov] -- not making that up. (*heavy-sigh*)
Re: Americans love scams! (Score:1)
Black Mirror are like "What do we do now? Everything we can think of, Trump's already doing?"
Long live FB Purity (Score:4, Informative)
seems like the article needs a couple corrections (Score:2)
The idea is to dissuade suspect advertisers from placing ads.
no, the idea is to not interrupt the flow of money into the pockets of mr. buckerberg
The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads.
the word plans should be enclosed in quotes
meta's real logo (Score:1)
fraud: it's what's for dinner
and for facebook:
you'll come for the cats, you'll stay for the scams
Misdirected investment (Score:2)
Clearly, Meta is wasting its time investing in AI. It would get a far better return on investment if it redirected all that cash into finding better ways to scam. After all, scamming has been proven profitable.
Re: Misdirected investment (Score:1)
First-task for the newly-born AIs? Make variations of existing scams to maximize believability, virality and deniability.
catch-22 (Score:1)
Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports
they may aspire to do better. but how can they do so when
safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...."
kafkaesque...
Not really news (Score:1)
\o/ (Score:1)
There's still substantial room for improvement to maximally appeal to scammers as target customers but it's a great start.
I particularly liked the way sub-95%-confidence scammers were identified as those who would pay more and leveraged - most creative:-)
Scammers: we got our eye on you! (Score:2)
Correction: We've got our A.I. on you!
Crime pays after all (Score:2)
The only way to stop it is to fine them double, or even triple, what they've earned from malvertising.
Doesn't it make them complicit in crime? (Score:3)
If I produced a piece of malware which could then be used by someone else to extort people, I'd be charged with soliciting crime.
If they allow known scammers to scam people, likewise, it makes them complicit and criminally punishable.
Since directors are usually personally liable for company actions, why don't they send Zuck to prison as an example?
With such a face he's probably guilty of more than one crime anyway.
No kidding? (Score:4, Interesting)
Plutocrats always get a slap on wrist (Score:3)
The fines should greatly multiply per infraction. For example, the first infraction would be $1B according to this, so the second should be, say, $10B, the third $100B, etc. And the first infraction puts them under more scrutiny, the cost of which they compensate the gov't for.
Re: (Score:2)
Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta’s internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads.
They charge the scammers extra for ads, every time they get flagged.
So they're already doing it to earn extra money.