Meta lowered the minimum age for WhatsApp users in the U.K. and EU from 16 to 13 this week, provoking fierce criticism from campaign groups. The change, announced in February, came into effect on Wednesday, 12 April.

In the days since, the reaction has been intense, with one campaign group, Smartphone Free Childhood, saying the change “flies in the face of the growing national demand for big tech to do more to protect our children.” As reported by The Guardian, the group went on to say, “Officially allowing anyone over the age of 12 to use their platform (the minimum age was 16 before today) sends a message that it’s safe for children. But teachers, parents and experts tell a very different story. As a community we’re fed up with the tech giants putting their shareholder profits before protecting our children.”

Meta said it made the change to bring the U.K. and EU in line with the minimum age that already applies in most other countries, and insisted that protections were in place for children, noting this week that it was putting new tools in place to “help protect against sextortion and intimate image abuse.”

Among these is a feature called Nudity Protection. Meta will soon “start testing our new nudity protection feature in Instagram DMs, which blurs images detected as containing nudity and encourages people to think twice before sending nude images. This feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return.”

The feature will be turned on by default for all users under 18.
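
To make that behavior concrete, here is a minimal sketch of the gating logic in Swift. Everything in it is hypothetical: the type names and the `flaggedAsNudity` verdict (standing in for whatever on-device classifier Meta uses) are modeled on the description above, not on any actual Meta API.

```swift
// Hypothetical model of the described behavior: images flagged as nudity
// are blurred in DMs, and the protection defaults to on for under-18s.
struct User {
    let age: Int
    var nudityProtection: Bool? = nil // nil = user never touched the setting
}

func protectionEnabled(for user: User) -> Bool {
    // An explicit choice wins; otherwise minors default to protected.
    user.nudityProtection ?? (user.age < 18)
}

func renderIncomingImage(for user: User, flaggedAsNudity: Bool) -> String {
    if flaggedAsNudity && protectionEnabled(for: user) {
        return "blurred (tap to view, with a warning)"
    }
    return "shown normally"
}

let teen = User(age: 15)
print(renderIncomingImage(for: teen, flaggedAsNudity: true)) // blurred
```

The design point the sketch mirrors is that the safer behavior is the default for minors, rather than something a teenager has to find and switch on.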

This is similar to actions taken by other tech companies. Apple, for instance, has offered a system called Communication Safety for more than two years, which warns children “when they receive or attempt to send images or videos containing nudity in Messages, AirDrop, Contact Posters in the Phone app, FaceTime video messages, and the system Photos picker.” Again, this is on by default for children but can be adjusted by a parent.

In Apple’s case, the child receiving the content sees a warning that it may not be what they want to see and is reassured that it’s okay not to view it. The image analysis happens entirely on-device.
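
Communication Safety itself has no public API, but Apple exposes the same on-device pattern to third-party apps through its SensitiveContentAnalysis framework on iOS 17 and later. The sketch below is a rough illustration of how an app might consult it; the helper function is ours, and the framework only runs when the user (or a parent) has enabled the relevant setting and the app holds Apple’s sensitive-content-analysis entitlement.

```swift
import SensitiveContentAnalysis

// Rough sketch: ask the system's on-device model whether an image is
// sensitive before displaying it. The image never leaves the device;
// the app only learns a yes/no verdict.
func shouldWarnBeforeShowing(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // Respect the user's (or parent's) setting: no analysis if disabled.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive // true: blur it and warn before showing
    } catch {
        return false // analysis failed; fall back to showing the image
    }
}
```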

There are other aspects to WhatsApp’s latest update. For a keen analysis of other implications, read Zak Doffman’s post here.
