TikTok has another problem to add to its growing pile: Italy’s consumer watchdog has opened an investigation over user safety concerns — stepping in after a so-called “French scar” challenge went viral on the video sharing platform in which users have been seen apparently pinching their faces in order to create and show off red lines as mock scars. (Yes, really.)
In a press release today, the AGCM accused TikTok of lacking adequate moderation systems for user generated content, asserting that it’s failing to uphold community guidelines set out in its T&Cs, where it claims to remove dangerous content, such as posts inciting suicide, self-harm and eating disorders. But apparently pinching yourself doesn’t meet the bar.
The AGCM’s investigation is targeting the Irish company, TikTok Technology Limited, which it says handles the platform’s European consumer relations, as well as English and Italian TikTok entities. And it said it carried out an inspection at the Italian headquarters of TikTok today, aided by the Special Antitrust Unit of the Guardia di Finanza.
The authority said it decided to look into TikTok after numerous videos of teens emerged engaging in “self-injurious behavior” — including the aforementioned “French scar” challenge, which last month led to a number of warnings from dermatologists that the activity could lead to permanent marks or redness.
The AGCM said it’s concerned TikTok has not set up adequate content monitoring systems, especially given the presence of particularly vulnerable users such as minors. It is also accusing the platform of failing to apply its own rules and remove dangerous content that its T&Cs claim is not allowed.
Additionally, it wants to look into the role of TikTok’s artificial intelligence in spreading the problematic challenge.
The platform famously uses AI to select content shown to users in the ‘For you’ feed, which is ‘personalized’ based on TikTok’s tracking and profiling of users, including factoring in signals like other similar content they’ve viewed or otherwise interacted with through the like function — although exactly how it works is a commercially guarded secret. So one question to consider is how much of a role TikTok’s algorithm had in amplifying and spreading this potentially harmful challenge.
We’ve reached out to the AGCM with questions, including whether it intends to audit the TikTok algorithm.
TikTok was also contacted for comment on the investigation — but at the time of writing it had not responded.
It’s not the first time TikTok safety concerns have triggered action by Italian regulators: Back in 2021, the data protection watchdog stepped in over child safety concerns linked to a “blackout” challenge apparently doing the rounds on the platform — after local media reported the death of an underage user (a ten-year-old girl). That intervention led TikTok to delete over half a million accounts which it was unable to verify did not belong to children.
The same regulator was also quick to warn TikTok against making a planned privacy policy switch last summer, after the platform had said it would stop asking for user consent for its ad targeting. After other DPAs also intervened, it went on to drop the plan.
More recently, TikTok has been fighting a growing tide of national security concerns that have led a number of Western governments to ban their staff from using the app on official devices. And the Biden administration has been further amping up pressure by threatening a total U.S. ban of the app if the company doesn’t split from its Chinese ownership.
TikTok ‘French Scar’ challenge triggers safety probe in Italy by Natasha Lomas originally published on TechCrunch