A recent study by Internet Matters found that many children have been exposed to distressing content on social media platforms, including reports on the deaths of Liam Payne and Charlie Kirk. The research indicated that 60% of children who follow news on social media encountered stories in June that caused them worry or upset, covering topics such as war, conflict, violence, death and crisis events.
The non-profit organisation said that algorithms are flooding children’s social media feeds with graphic content they find disturbing. There are also concerns about AI-generated content, with more than a quarter (27%) of children admitting they had been taken in by fake news stories.
The study also found that vulnerable children, including those with special educational needs or health conditions, are more susceptible to believing fake or AI-generated news. Approximately 43% of this group fell for false stories, compared with 23% of children not classified as vulnerable in the research.
One teenager described being deceived by AI-generated content, saying she had believed fake videos of natural disasters. The report warned that the spread of misinformation online can deepen social and political divisions and incite real-world harm, such as the riots that followed the Southport murders.
The report urged social media companies to integrate media literacy into their platforms to assist children in evaluating and contextualizing the information they encounter. Rachel Huggins, co-chief executive of Internet Matters, emphasized the need to balance the benefits of immediate news access with the potential negative impacts on children’s well-being.
Jess Asato, a Labour MP, stressed the importance of equipping children with the skills to navigate the digital world safely and critically, alongside the forthcoming regulations under the Online Safety Act. The research, conducted in July with 1,000 UK children aged 11 to 17, coincided with the introduction of Ofcom’s children’s codes, which are intended to address harmful algorithms.
A government spokesperson reiterated the commitment to safeguarding young people from harmful online content through improved platform accountability and enhanced online safety skills for families. The government aims to enforce legal requirements to make algorithms safer and reduce children’s exposure to toxic content.
