Just a few days ago, Apple announced that it will soon deploy a new strategy to prevent the spread of child pornography: a scanning mechanism built into iCloud Photos accounts. The system will rely on the US National Center for Missing and Exploited Children (NCMEC), which provides cryptographic hashes of known images so that Apple can search for matches and alert the authorities.
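In outline, the matching step amounts to hashing each uploaded photo and checking the result against the supplied blocklist. The sketch below illustrates the idea in Swift, using SHA-256 purely as a stand-in: the function names, the plain in-memory set and the placeholder digest are all assumptions for illustration, not Apple's actual pipeline, which the company says involves additional cryptographic safeguards.

```swift
import CryptoKit
import Foundation

// Hypothetical blocklist of digests of known images. In the real system,
// NCMEC supplies these; the value below is just the SHA-256 of an empty
// input, used here as a placeholder.
let knownHashes: Set<String> = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
]

// Hash the photo's raw bytes and test for membership in the blocklist.
// Only the digest is compared; the image content itself is never inspected.
func matchesKnownImage(_ photoData: Data) -> Bool {
    let digest = SHA256.hash(data: photoData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownHashes.contains(hex)
}
```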

The system designed by Apple incorporates a number of safeguards against abuse. For example, it generates no information about images that do not match the hashes in question, and matches are reviewed by a person before the authorities are alerted, rather than being reported automatically. Even so, privacy and cybersecurity experts warn that the technology carries significant risks, particularly of abuse by governments.

The Electronic Frontier Foundation (EFF) was one of the quickest to respond. According to the NGO, which is dedicated to protecting freedom of expression on the internet, “iMessage and iCloud filtering is not a slippery slope that opens back doors” to the suppression of opinion, but a “fully integrated system that is just waiting” for someone to exploit it for other purposes.

The EFF’s fear is that the hashing system, which for now only detects pornographic images by exact match (a hash is a short alphanumeric fingerprint derived from a file’s contents, so technically Apple never “looks” at the photos themselves), could be repurposed to search for other kinds of “harmful” content. Since the technology is already in place, a sufficiently determined government would have little trouble asking (or compelling) Apple to perform other kinds of searches.
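The key property is that a digest reveals essentially nothing about the file it came from: flip a single bit and the hash changes beyond recognition, which is why digest comparison detects only exact copies of already-known images. A minimal Swift illustration, with made-up sample data:

```swift
import CryptoKit
import Foundation

// Helper: hex-encode the SHA-256 digest of some bytes.
func hexHash(_ data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

let original = Data("placeholder photo bytes".utf8)
var tweaked = original
tweaked[tweaked.startIndex] ^= 0x01  // flip a single bit

// The two digests bear no visible resemblance, so a scanner holding
// only hashes learns nothing about near-misses or novel images.
print(hexHash(original))
print(hexHash(tweaked))
```

Note, too, how little the matching sketch shown earlier cares about what its blocklist contains: swap in a different set of hashes and the same machinery searches for entirely different material, which is precisely the EFF’s concern.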

WhatsApp head Will Cathcart also expressed concern shortly after the news broke, describing the technology as a “surveillance system created and operated by Apple that could very easily be used to scan private content”. According to Cathcart (who, obviously, has interests of his own here), WhatsApp’s approach, which relies on user reports, “does not break encryption” and is an acceptable alternative, unlike the one proposed by Apple.

More recently, Tim Sweeney stated, without mentioning Apple by name, that “we should abandon the temptation to improvise dictates on users’ lives through the odd mix of populism, PR and profit-seeking”. Epic’s CEO believes the answer to today’s problems is not for “corporations to reign over the entire Internet”, and that the old principles of communications protection, created “back when government was the most powerful force in people’s lives”, should serve as a guide today.

That said, it is important to note that the system described above applies only to files hosted on iCloud. For images sent via Messages, Apple takes an entirely different approach, one confined to family accounts and one that never alerts the authorities. Instead of hashes, the technology detects “sexually explicit images” (presumably through machine learning), blurring them to prevent direct viewing and warning the child sending or receiving the file. Parents of children aged 12 or younger can additionally be notified, so they are aware that their child may be engaging in inappropriate and/or dangerous conversations.
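The flow, as described, is a purely local decision with no reporting step. The sketch below captures that logic in Swift; the type, the score threshold and the function names are all hypothetical, standing in for whatever on-device classifier Apple actually uses.

```swift
// Hypothetical sketch of the Messages safety flow; the names, the
// threshold and the age cut-off mirror the description above,
// not Apple's real API.

struct IncomingImage {
    let explicitScore: Double   // assumed output of an on-device classifier
}

func blurImage() { print("image blurred behind a warning") }
func warnChild() { print("child shown an explanation before viewing") }
func notifyParents() { print("parents notified (opt-in, young children only)") }

func handle(_ image: IncomingImage, childAge: Int, parentsOptedIn: Bool) {
    // An assumed threshold; Apple does not publish one.
    guard image.explicitScore > 0.9 else { return }

    blurImage()   // prevent direct viewing
    warnChild()   // warn the child sending or receiving the file

    if childAge <= 12 && parentsOptedIn {
        notifyParents()
    }
    // Deliberately absent: any call that contacts Apple or the authorities.
}

handle(IncomingImage(explicitScore: 0.97), childAge: 11, parentsOptedIn: true)
```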

It should also be noted that scanning for child pornography using hashes is not entirely new. Gmail and Facebook use similar systems, and not long ago the European Parliament approved the use of such tools, although in that case they are voluntary and must be governed by the rulings of each country’s data protection authorities.
