TikTok and Meta ‘pushed harmful content’ on people’s feeds for views
TikTok and Meta made decisions which allowed harmful content on users’ feeds, whistleblowers have told the BBC.
It comes after research found that the tech giants built algorithms that reward ‘outrage’.
More than a dozen whistleblowers and insiders from TikTok and Meta have exposed how the companies took risks with safety on issues including violence, sexual blackmail and terrorist recruitment while trying to maximise engagement.
A new documentary called Inside the Rage Machine, which airs tonight on the BBC at 9pm, explores how the industry promoted harmful content to increase views.
One engineer at Meta was allegedly told by senior management to allow more ‘borderline harmful content’, like misogyny and conspiracy theories, to compete with TikTok.
A TikTok employee told the BBC that staff were told to prioritise reviewing reports involving politicians over a series of reports of harmful posts featuring children.
Meta said in a statement: ‘Any suggestion that we deliberately amplify harmful content for financial gain is wrong.’
TikTok said these were ‘fabricated claims’ and that it invests in technology to prevent harmful content from ever being viewed.
TikTok employees were ‘told to prioritise cases involving politicians’
Nick*, a member of TikTok’s trust and safety team, has shared internal documentation and spoken out about his experience while at the company.
He said that material linked to terrorism, sexual violence, physical violence, abuse, and trafficking is increasing, and the action taken against these videos is ‘different to what the sites are claiming’.
The BBC was shown evidence of how TikTok rated some trivial cases involving politicians as a higher priority for review by the safety team than several other cases involving harm to teenagers.
In one example, a political figure who was the subject of mockery – being compared to a chicken – was prioritised over a 17-year-old who reported being the victim of illegal cyberbullying and impersonation in France.
One case involved a 16-year-old in Iraq who complained that sexualised images purporting to be of her were being shared on the app.
‘If you look at the country where this report comes from, it’s a very high risk because it’s a minor and it involves sexual blackmail and then you can see the priority here. The urgency is not high at all,’ Nick said.
When the trust and safety team asked to prioritise cases involving young people over these political cases, the whistleblower said, they were told not to, and to continue handling cases according to the ranking they were given.
Nick said he sees this as the company not caring about children’s safety and prioritising relationships with politicians and governments.
The trust and safety employee had blunt advice to parents with children using TikTok: ‘Delete it, keep them as far away as possible from the app for as long as possible.’
A TikTok spokesperson told the BBC that the claims ‘ignore the reality of how TikTok enables millions to discover new interests, find community, and supports a thriving creator economy in the UK.’
The company said teen accounts have more than 50 preset safety features and settings, which are automatically turned on.
They added: ‘We invest in technology that helps prevent harmful content from ever being viewed, maintain strict recommendation policies and provide features for people to tailor their experiences.’
Ruofan Ding, who worked as a senior engineer building TikTok’s recommendation engine from 2020 until 2024, said that as TikTok tried to improve its algorithm almost every week to gain more market share, he started seeing more ‘borderline’ content or problematic posts.
Meta ‘maximised profits at the expense of their audience’s wellbeing’
A senior Meta researcher, Matt Motyl, said the company’s competitor to TikTok, Instagram Reels, was launched in 2020 without sufficient safeguards.
Internal research shared with the BBC showed comments on Reels had a significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram.
Motyl shared dozens of research documents from Meta, which appeared to show that Facebook was aware of issues with its algorithm.
According to the documents, the algorithm offered users a ‘path that maximises profits at the expense of their audience’s wellbeing’.
Meta was struggling to prevent harm on Reels following its launch, according to one research paper he shared with the BBC.
It suggests Reels posts had a higher prevalence of harmful comments than posts on the main Instagram feed: 75% higher for bullying and harassment, 19% higher for hate speech, and 7% higher for violence and incitement.
A former engineer at Meta, Tim*, said more borderline harmful content was allowed as views declined.
‘You’re losing to TikTok, and therefore your stock price must suffer. People started becoming paranoid and reactive, and they were like, let’s just do whatever we can to catch up. Where can we get like 2%, 3% revenue for the next quarter?’ Tim said.
He said that the decision to stop limiting content that was possibly harmful but not illegal – and that users were engaging with – was made by a senior vice-president of Meta who Tim believed reported directly to Mark Zuckerberg.
A Meta spokesperson denied the whistleblowers’ claims.
‘The truth is, we have strict policies to protect users on our platforms and have made significant investments in safety and security over the last decade,’ the spokesperson said.
The company said it has ‘made real changes to protect teens online’, including introducing a new Teen Accounts feature ‘with built-in protections and tools for parents to manage their teens’ experiences.’