Meta Platforms Inc’s Instagram said it will begin notifying parents or guardians when teenage users repeatedly search for terms related to suicide or self-harm within a short period of time. The company said the alerts will apply only to parents who have enrolled in Instagram’s optional parental supervision tools. Messages will be sent by email, text message, WhatsApp, or in-app notification, depending on the contact information parents provide.
Instagram said the feature will be available initially in the United States, the United Kingdom, Australia, and Canada, with a broader rollout to additional countries planned later in 2026. Instagram already blocks content it identifies as related to suicide or self-harm from appearing in search results for accounts designated as belonging to teens and directs such users to online resources designed for mental health support.
Meta said the new alerts are designed to inform parents when a teen makes multiple such searches in rapid succession. The company said it chose what it described as an “appropriate threshold” for repeated searches within a limited time, intended to avoid notifying parents about isolated or accidental searches. Instagram said the alerts will include links to support resources that suggest ways parents can discuss sensitive topics related to suicide and self-harm with their children.
The company said it is also working on tools that would notify parents if teens ask its artificial intelligence tools questions about suicide and self-harm. Meta did not provide specific details on the number of searches or the exact criteria that will trigger a parent notification.
The initiative is part of Instagram’s broader efforts to address how users interact with suicide- and self-harm-related content on its platform. Instagram said it already uses automated systems to block suicide or self-harm content from search results and to show safety resources to users who encounter such material. The company said these systems operate regardless of whether parental supervision tools are in use, and the new notifications are intended to provide additional visibility for parents who choose to receive them.
Instagram said the support resources include links to organisations that provide advice on talking about emotional distress and crisis situations. The company said it will continue to develop measures that aim to keep young users safe while using its services.
Meta’s announcement of the alerts comes as the company faces legal challenges in the United States over its handling of young users and harmful content. Lawsuits against Meta in US courts allege that the company’s social media services, including Instagram, are designed in ways that can harm young people by fostering addiction or failing to shield minors from content that can negatively affect their well-being.
Meta has responded in legal filings and public statements by saying that research does not establish conclusive links between social media usage and harm to youth mental health. The company said it remains focused on applying research and developing tools it says are intended to protect minors on its platforms.
Instagram said it plans to update its policies and tools based on feedback and evolving research, but did not provide specific timelines for the broader global rollout or the release of additional alert features related to artificial intelligence interactions. The company said it will monitor how the parent notification feature performs in the initial markets and may refine the criteria that trigger alerts. Meta said it will continue to engage with external experts on developing approaches to user safety, particularly for teenage accounts.