


Instagram, owned by Meta Platforms Inc., announced it will notify parents if their teenagers repeatedly search for terms related to suicide or self-harm. The move comes as governments worldwide increase pressure on platforms to protect children online, following Australia’s ban on social media for under-16s.
Starting next week, parents who have signed up for Instagram’s optional supervision tools in the United States, Britain, Australia, and Canada will receive alerts if their children search for suicide or self-harm content. Alerts will include expert guidance to help parents navigate sensitive conversations. Other countries will follow later.
Instagram said the alerts complement its existing protections, which block harmful searches and direct users to support resources. Teen accounts for under-16s require parental permission to change settings, with optional monitoring available.
However, child safety charities have criticised the measures. The Molly Rose Foundation warned that such notifications may panic parents and could do more harm than good. Its chief executive, Andy Burrows, said: “Parents might be ill-prepared for the sensitive conversations these alerts trigger.” Ian Russell, Molly’s father, also expressed concern about parents panicking in real-time situations.
Other experts agreed that while the alerts are a positive step, Meta should focus on reducing harmful content and building age-appropriate systems. Papyrus Prevention of Young Suicide noted that children remain exposed to dangerous content online and that prevention should not rely solely on parental alerts.
Instagram acknowledged that alerts may occasionally trigger without real cause, as the system errs on the side of caution. Alerts will be sent via email, text, WhatsApp, or Instagram notification, depending on the family’s contact information. Meta also plans to extend alerts to AI chatbot interactions, as more teens turn to AI tools for support.
Social media companies are under increasing global scrutiny to protect young users. Countries including Spain, France, and the UK are considering limits similar to Australia’s ban. Regulators are also reviewing tech companies’ business practices affecting children.