Meta Removes 6.8 Million WhatsApp Accounts Linked to Pig Butchering Scam Rings
Meta has removed more than 6.8 million WhatsApp accounts linked to “pig butchering” scam operations run by organized crime syndicates in Southeast Asia this year, part of a broader effort to clamp down on criminals trying to steal crypto from victims.
Typically, pig butchering scams begin with an unsolicited message and escalate into conversations that shift onto encrypted messaging apps or private chats. The end goal is to convince victims to transfer money—often crypto—to fake businesses or “investment” platforms.
Victims usually realize too late that their deposits can’t be withdrawn.
“We proactively detected and took down accounts before scam centers were able to operationalize them,” the company said in a press release.
Meta linked the accounts to scam networks operating out of countries like Cambodia, Myanmar and Thailand, where law enforcement has reported that criminal groups carry out mass fraud campaigns targeting victims worldwide.
The recent enforcement push is designed to disrupt these groups before they can begin victimizing users.
WhatsApp is also rolling out new tools to help users spot and report suspicious activity. One such feature will alert users when they are added to a group by someone not in their contact list, a common tactic used by fraudsters promoting fake investment schemes.
The announcement comes amid growing calls for social media and messaging apps to take a more proactive approach to stopping scammers who use these platforms to reach and exploit victims, often at scale.
According to the FBI’s Internet Crime Complaint Center (IC3), $9.3 billion was lost to online scams in 2024—marking a record high.
Cryptocurrency scams alone accounted for more than $3.9 billion of that total, with elderly users hit particularly hard. Because that figure reflects only reports made to the FBI, the real total is likely significantly higher.
Growing Crypto Scam Problem
Many of these scams begin on messaging platforms like WhatsApp, Facebook Messenger or Telegram.
Meta cited one recent case in which it collaborated with OpenAI to disrupt a Cambodian group running a rent-a-scooter pyramid scheme. The scammers reportedly used ChatGPT to craft instructions for victims and recruited people with fake offers of money in exchange for social media engagement.
Authorities globally have ramped up warnings in recent months, urging users to enable two-step verification on WhatsApp and to be suspicious of any odd messages or unexpected group invites.
Still, critics argue that platforms like Meta need to take stronger, more systemic action.
In a blog post last month, Greg Williamson, senior vice president of fraud reduction at the Banking Policy Institute, said that the incentives for social media giants to crack down on scammers are misaligned.
“To effectively combat scams, tech platforms must prioritize customer protection. They are in a strong position to prevent abuse, but their incentives often work against proactive action,” he wrote.
He noted that social media platforms earn ad revenue from scam content, and highlighted one ongoing case in which Meta is accused of allowing more than 230,000 scam ads featuring a deepfake of Australian billionaire Andrew Forrest to run on its platforms.
Deepfakes featuring everyone from Elon Musk to King Charles III have also been shared on social media to entice people into making investments. Those who are impersonated in these deceptive ads have reported struggling to get Meta to take them down.
Scammers purchase advertising from the likes of Meta to help spread their posts.
“These companies have the capability, but not the financial incentive, to prevent fraud at the source,” Williamson added.