A concerned father says that after he used his Android smartphone to take pictures of an infection on his toddler’s groin, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts, filed a report with the National Center for Missing and Exploited Children (NCMEC), and spurred a police investigation, highlighting the complications of trying to tell the difference between potential abuse and an innocent photo once it becomes part of a user’s digital library, whether on their personal device or in cloud storage.
Concerns about the consequences of blurring the lines for what should be considered private arose last year when Apple announced its Child Safety plan. As part of the plan, Apple would locally scan images on Apple devices before they were uploaded to iCloud and then match the images against the NCMEC’s hashed database of known CSAM. If enough matches were found, a human moderator would review the content and lock the user’s account if it contained CSAM.
The Electronic Frontier Foundation (EFF), a nonprofit digital rights organization, slammed Apple’s plan, saying it “could open a backdoor to your private life” and that it “meant a decrease in privacy for all iCloud Photos users, not an improvement.”
Apple eventually placed the stored-image scanning component on hold, but with the launch of iOS 15.2, it proceeded to include an optional feature for child accounts included in a family sharing plan. If parents opt in, the Messages app on a child’s account “analyzes image attachments and determines if a photo contains nudity, while maintaining the end-to-end encryption of the messages.” If it detects nudity, it blurs the image, displays a warning to the child, and presents them with resources intended to help with safety online.
The main incident highlighted by The New York Times took place in February 2021, when some doctors’ offices were still closed due to the COVID-19 pandemic. As the Times notes, Mark (whose last name was not revealed) noticed swelling in his child’s genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor ultimately prescribed antibiotics that cleared up the infection.
According to the NYT, just two days after taking the photos, Mark received a notification from Google saying his accounts had been locked due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”
Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft’s PhotoDNA to scan uploaded images for matches with known CSAM. In 2012, it led to the arrest of a man who was a registered sex offender and who used Gmail to send images of a young girl.
In 2018, Google announced the launch of its Content Safety API AI toolkit that can “proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible.” It uses the tool for its own services and, along with CSAI Match, a video-targeting hash-matching solution developed by YouTube engineers, offers it for use by others as well.
“We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a ‘hash,’ or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.”
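To illustrate the general idea behind hash matching, the toy sketch below implements a simple “average hash” (aHash) over a grid of grayscale pixel values and compares two images by Hamming distance. This is purely illustrative: PhotoDNA and Google’s actual systems are proprietary and far more robust to cropping, resizing, and re-encoding than this example.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, 1 if the pixel is
    brighter than the image's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Two 4x4 toy "images": the second is a slightly brightened copy of the first.
img_a = [[10, 200, 30, 220],
         [15, 210, 25, 230],
         [240, 20, 250, 10],
         [235, 25, 245, 15]]
img_b = [[p + 5 for p in row] for row in img_a]

h_a, h_b = average_hash(img_a), average_hash(img_b)
# A near-duplicate yields an identical (or nearly identical) hash, so a small
# Hamming distance against a database entry flags a likely match.
print(hamming_distance(h_a, h_b))  # → 0
```

The key property such hashes aim for is that minor perturbations of an image leave the fingerprint essentially unchanged, unlike a cryptographic hash, where a one-pixel change produces a completely different digest.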
A Google spokesperson told the Times that Google only scans users’ personal images when a user takes “affirmative action,” which can apparently include backing up their photos to Google Photos. When Google flags exploitative images, the Times notes that Google is required by federal law to report the potential offender to the CyberTipline at the NCMEC. In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipline, while the NCMEC alerted the authorities to 4,260 potential victims, a list that the NYT says includes Mark’s son.
Mark ended up losing access to his emails, contacts, photos, and even his phone number, as he used Google Fi’s mobile service, the Times reports. Mark immediately tried appealing Google’s decision, but Google denied his request. The San Francisco Police Department, where Mark lives, opened an investigation into him in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately found that the incident “did not meet the elements of a crime and that no crime occurred,” the NYT notes.
“Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms,” Google spokesperson Christa Muldoon said in an emailed statement to The Verge. “We follow US law in defining what constitutes CSAM and use a combination of hash-matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”
While it’s undeniably important to protect children from abuse, critics argue that the practice of scanning a user’s photos unreasonably encroaches on their privacy. Jon Callas, a director of technology projects at the EFF, called Google’s practices “intrusive” in a statement to the NYT. “This is precisely the nightmare that we are all concerned about,” Callas told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”