Meta mayhem: why does Facebook keep censoring the Which? Scam Alerts group?

Platform notorious for scam adverts has repeatedly threatened to close our fraud prevention community

For years, users of Facebook and Instagram have complained to Which? about both the sheer number of scams on those platforms, and the failure of parent company Meta to remove them even after they’re reported.

Indeed, it’s something we experienced ourselves when we urged Meta for almost a year to remove a scam car-leasing firm that was stealing thousands of pounds from Instagram users. 

It refused, and the profile finally disappeared when we worked with BBC One’s Morning Live to expose it to a wider audience. Even then, we don’t know whether Meta or the scammers themselves removed it. 

And in November, news agency Reuters exposed internal Meta documents which projected that 10% of its 2024 earnings came from fraudulent ads.

Meta claimed its estimate had been 'rough and overly inclusive' but declined to provide an updated figure. It told Which? that user reports about scam ads had fallen by 50% in the 15 months to November, and that it had removed 134m pieces of scam ad content so far that year.


Which? Scam Alerts group put 'at risk'

Given this background, you can imagine our frustration when the Which? Scam Action and Alerts Facebook group was repeatedly put under threat of closure.

Launched four years ago, the group exists to raise awareness of the latest scams and empower people to spot and avoid fraud attempts. With 42,000 members, we believe it’s the largest UK scam-prevention group on Facebook.

In January 2025, we received the first of multiple warnings on Facebook stating that the group ‘is at risk of being disabled, and has reduced distribution and other restrictions, due to Community Standards violations’.

The Which? social media team manages this community with strict criteria for approving posts, to keep everyone safe. For example, any scams shared for awareness and education purposes must be posted as screengrabs and must not include links to the actual scam content.

Meta misfire

We initially contacted Meta through its support channels. It looked at our account, seemed to suggest there were no restrictions against us, and advised us to try basic troubleshooting steps, such as clearing our cache or logging in from an incognito browser.

We did this, yet the message remained. It was only when we contacted Meta’s press office to demand an explanation that the threat was abruptly removed – almost three weeks after it had appeared.

The same threat of closure appeared again in March and then in April. Both times it was lifted only after we contacted Meta's press office.

We hoped that would be the end of the threats, but five months later in October, the group was suddenly suspended and taken offline by Meta. It took three days of urgent approaches to the Meta press office before it was reinstated.

Meta told us this had happened because fraud prevention posts can sometimes be misinterpreted by both automated systems and human reviewers.

It advised us to include a caption in each post making clear that it is highlighting a scam, but we've received no guarantee that the issue won't recur.

Platforms deluged by scams

This frustrating experience unfolded in the same year Meta announced it would stop working with independent fact-checkers to monitor content in the US. 

Although this doesn’t apply in the UK, it’s not something I’d want to see replicated here. Having investigated fraud for almost a decade, I can say from experience that Meta’s platforms are still deluged with scams – even with the Online Safety Act now in force.

Many of the scams shared on our group emanate from Meta's own platforms – Facebook, Instagram and WhatsApp.

As Meta seems to struggle with taking appropriate action against harmful content – and takes inappropriate action against genuine content – it’s baffling to see it shrug off the help of experts across the pond.

As one of the biggest tech companies in the world, Meta should be capable of designing a moderation system that can tell the difference between actual scam content and the fraud prevention content that protects people from it.

Does Meta really not know how, or does it simply not care enough?