TikTok Removes 3.6M Nigerian Videos As Crackdown & Criticism Intensify
TikTok removed over 3.6 million Nigerian videos in the first quarter of 2025, a 50% jump from Q4 2024, the company revealed today, as it intensifies content moderation amid increased scrutiny over safety on its platform. Globally, TikTok removed more than 211 million videos during the quarter, with automated systems responsible for the majority of removals.
The company claims 98.4% of removed content was detected proactively before any user report, with 92.1% taken down within 24 hours, figures it says underscore its AI-powered moderation capabilities. In Nigeria, enforcement included shutting down over 42,000 TikTok LIVE rooms and nearly 50,000 individual livestreams, and removing 129 West African accounts linked to covert influence operations.
But despite these numbers, critics’ concerns linger. Digital rights advocates note that TikTok’s opaque algorithms may overreach: users from marginalised groups have reported suppression even without explicit violations, and content flagged as political satire has occasionally been removed as misinformation.
TikTok is massively popular and culturally relevant in Nigeria, with more than 37 million active users, and consistently ranks among the country’s top four most downloaded apps. Driven primarily by users under 35, it is a vital channel for entertainment, music promotion, trends, and social commentary. Creators there have even faced legal action over political or religious satire; two TikTokers were jailed in Kano after posting a satirical video about a governor.
Further afield, critics say TikTok favours engagement over nuance, citing global shortcomings in tackling extremist content and algorithmic bias. Governments have banned or restricted TikTok over national security and censorship fears, including bans on official devices in NATO and EU agencies, and a one-year ban announced by Albania to take effect in 2025.
TikTok, for its part, pushes back and defends its moderation strategy, pointing to its “proactive detection” and rapid takedown efforts. It has revamped its Content Advisory Council in the US to include free‑speech advocates, and introduced features like “Footnotes” to contextualise contentious content rather than remove it outright.
In Nigeria, the company is complementing takedown numbers with educational and support initiatives. In June, it launched in‑app helplines through a partnership with Cece Yara, and appointed Dr. Olawale Ogunlana as a Digital Well‑being Ambassador.
However, experts caution about censorship risks. Observers warn that even well‑intentioned moderation can chill speech, particularly in countries facing proposed legislation like Nigeria’s controversial “Anti‑Social Media Bill.” Transparency advocates argue that rigid algorithms and unclear policies risk punishing legitimate political and cultural expression.
TikTok portrays the surge in removals as evidence of its commitment to safety, but in doing so, it fuels concerns over automated censorship, algorithmic bias, and the broader societal impact of its content control. Whether its expanded moderation strikes the right balance between protection and free expression remains a hotly debated issue.