Apple to deploy machine learning tools to flag child abuse content
NEW DELHI: Tech giant Apple is deploying new technology on its iOS, macOS, watchOS and iMessage platforms that will detect child sexual abuse material (CSAM) and block it.
The company announced that its upcoming devices will include new applications of cryptography that limit the spread of such imagery while still protecting user privacy. The features will roll out with iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.
According to Apple, the new child safety features have been developed in collaboration with child safety experts. They will include new communication tools that allow parents to “play a more informed role” in how children navigate communication online. “The Messages app will use on-device machine learning (ML) to warn about sensitive content while keeping private communications unreadable by Apple,” the company said on its Child Safety page.
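As a rough illustration of what “on-device” means in practice, here is a minimal sketch, assuming a hypothetical local classifier: both the image and its score stay on the device, and the only outcome is a local warning. The function names and the cutoff value are illustrative assumptions, not an Apple API; the article does not describe the model itself.

```python
# Purely illustrative sketch of an on-device sensitivity check. The classifier
# below is a stand-in, not an Apple API, and the 0.9 cutoff is an arbitrary
# example value; the point is that the image and the score never leave the
# device, which is how message content stays unreadable to Apple.

def local_sensitivity_score(image_bytes: bytes) -> float:
    """Stand-in for an on-device ML classifier returning a score in [0, 1].

    A real implementation would run a local model; a fixed value is returned
    here only so the sketch runs end to end.
    """
    return 0.0

def should_show_warning(image_bytes: bytes, cutoff: float = 0.9) -> bool:
    """Decide entirely on the device whether the Messages app should warn."""
    return local_sensitivity_score(image_bytes) >= cutoff

print(should_show_warning(b"example image bytes"))  # False with the stub score
```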
“CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos,” it added. “CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. Apple servers flag accounts exceeding a threshold number of images that match a known database of CSAM image hashes so that Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC),” the company said in a technical document.
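To make the counting logic in that description concrete, here is a minimal sketch, assuming a generic image-hash function and a directly visible hash database. The function names, hash strings and threshold value are illustrative assumptions; Apple's actual system is cryptographic, so individual match results stay hidden until the threshold is crossed.

```python
# Illustrative sketch only: threshold-based matching against known image hashes.
# This is not Apple's protocol; the real design keeps individual match results
# hidden cryptographically until the threshold is exceeded. The hash strings
# and the threshold below are placeholder values.
from typing import Iterable, Set

def count_matches(photo_hashes: Iterable[str], known_csam_hashes: Set[str]) -> int:
    """Count how many of an account's photo hashes appear in the known database."""
    return sum(1 for h in photo_hashes if h in known_csam_hashes)

def exceeds_threshold(photo_hashes: Iterable[str],
                      known_csam_hashes: Set[str],
                      threshold: int) -> bool:
    """Only accounts whose match count reaches the threshold are surfaced
    for human review and a report to NCMEC."""
    return count_matches(photo_hashes, known_csam_hashes) >= threshold

# Toy usage with made-up hashes and an arbitrary threshold of 10.
known = {"a1f3c9", "9bc042", "77de10"}
photos = ["000111", "a1f3c9", "5555aa"]
print(exceeds_threshold(photos, known, threshold=10))  # False: only one match
```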
Apple is also updating Siri and Search to provide “expanded information” and help when users encounter “unsafe situations”. “Siri and Search will also intervene when users try to search for CSAM-related topics,” the company said.
For privacy, the company said it won’t learn “anything about images that do not match the known CSAM database”, and it won’t be able to access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded. “The risk of the system incorrectly flagging an account is extremely low,” the company claims. “The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account,” it added.
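The quoted 1-in-1-trillion figure refers to accounts, not individual images: with a match threshold, several independent false matches have to accumulate before an account is flagged at all. Below is a back-of-the-envelope sketch under a simple binomial model; the photo count and per-image false-match rate are made-up assumptions, since the article quotes only the resulting account-level target.

```python
# Back-of-the-envelope sketch: how a match threshold drives down the chance of
# falsely flagging an account, assuming each photo independently produces a
# false hash match with probability p. Both p and the photo count are made-up
# illustrative numbers, not figures published by Apple.
from math import comb

def false_flag_probability(n_photos: int, p: float, threshold: int) -> float:
    """P(at least `threshold` false matches among n_photos photos).

    Sums the binomial upper tail; the terms shrink so fast that a short
    window past the threshold gives a good approximation.
    """
    upper = min(n_photos, threshold + 60)
    return sum(
        comb(n_photos, k) * p**k * (1 - p) ** (n_photos - k)
        for k in range(threshold, upper + 1)
    )

# Hypothetical library of 20,000 photos and a 1-in-a-million false-match rate.
for t in (1, 5, 10, 30):
    print(f"threshold={t:>2}  P(false flag) ~ {false_flag_probability(20_000, 1e-6, t):.2e}")
```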
Further, users won’t be able to access or view the database of CSAM image hashes, nor will they be able to identify which of their images have been flagged as CSAM. Apple will also manually review flagged matches to confirm its algorithms have found a genuine match before informing NCMEC in the US. The violating user’s account will then be blocked, and the user will be able to appeal the ban if they believe a mistake has been made.
It seems the feature will only be available in the US at launch.
While Apple isn’t the only company looking to use software for child safety online, the company’s new tools have raised some privacy concerns too. “Child exploitation is a serious problem, and Apple isn’t the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor,” said the Electronic Frontier Foundation, expressing its disappointment at the move.