Meta Introduces New Measures to Enhance Teen Safety on Instagram
Latest Move Aims to Combat Harmful Content and Protect Young Users
Meta, the parent company of Instagram, has announced a series of new features designed to enhance safety for teenagers on the social media platform. The initiative comes amid growing concern over harmful content and scams targeting young users.
The tech giant revealed plans to blur images detected as containing nudity in direct messages, a move aimed at safeguarding teens and deterring potential scammers from reaching them. The feature, which will be tested on Instagram, marks a significant step in Meta's ongoing efforts to address safety concerns across its platforms.
Using on-device machine learning, Instagram will analyze direct messages to determine whether they contain nudity. The protection will be turned on by default for users under the age of 18, and Meta will show adult users a notification encouraging them to enable it as well.
Because the analysis happens entirely on the user's device, the nudity protection works even in end-to-end encrypted chats: Meta never sees the content of these messages unless a user reports them. By pairing privacy with protection, Meta aims to provide a safer environment for teens to interact on the platform.
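Meta has not published how the feature is implemented, but the behavior described above can be sketched at a conceptual level: a classifier running on the device scores each incoming image, and anything above a confidence threshold is blurred before it is displayed, so no message content ever leaves the device. The Python sketch below is purely illustrative; the function names, threshold, and classifier interface are assumptions, and only the Pillow blur call is a real API.

```python
# Illustrative sketch of on-device nudity filtering; NOT Meta's actual code.
# Names, the threshold, and the classifier interface are all assumptions.
# The key design property matches the article: the image is scored and
# blurred locally, which is why the feature can coexist with
# end-to-end encryption.
from typing import Callable
from PIL import Image, ImageFilter

NUDITY_THRESHOLD = 0.8  # hypothetical confidence cutoff


def prepare_for_display(
    image: Image.Image,
    classify: Callable[[Image.Image], float],  # stand-in for the on-device ML model
    user_is_minor: bool,
    filter_enabled: bool,
) -> Image.Image:
    """Return the image as-is, or a blurred copy if protection applies."""
    # Per the article: on by default for under-18s, opt-in for adults.
    if not (user_is_minor or filter_enabled):
        return image
    if classify(image) >= NUDITY_THRESHOLD:
        # Blur locally before display; the original never leaves the device.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image


# Example with a dummy classifier standing in for the real model:
if __name__ == "__main__":
    incoming = Image.new("RGB", (256, 256))
    always_flag = lambda img: 1.0  # pretend the model flagged this image
    shown = prepare_for_display(incoming, always_flag,
                                user_is_minor=True, filter_enabled=False)
```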
In addition to nudity protection, Meta is developing technology to identify accounts that may be involved in sextortion scams. The company is also testing pop-up messages to warn users who may have interacted with accounts flagged for such scams.
These latest measures follow Meta's earlier commitment to hide sensitive content from teens on both Facebook and Instagram. By restricting content related to suicide, self-harm, and eating disorders, Meta aims to create a more positive and supportive online environment for young users.
However, Meta's efforts to improve safety come amid increasing scrutiny from regulators and legal challenges. In the United States, attorneys general from 33 states, including California and New York, have filed lawsuits against the company, alleging that it misled the public about the dangers of its platforms. Similarly, the European Commission has sought information on how Meta protects children from illegal and harmful content, highlighting the global significance of these issues.
As Meta continues to navigate these challenges, its latest initiatives demonstrate a commitment to prioritizing user safety, particularly for young and vulnerable users on Instagram.