In August 2021, [[Apple Inc]] announced a [[child sexual abuse material]] (CSAM) detection system known as [[NeuralHash]]. The accompanying technical summary states that "Instead of scanning images [on corporate] [[iCloud]] [servers], the system performs on-device matching using a database of known CSAM image hashes provided by [the [[National Center for Missing and Exploited Children]]] (NCMEC) and other child-safety organizations."<ref name="apcsam">{{cite news |title=CSAM Detection - Technical Summary |url=https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf |publisher=Apple Inc |date=August 2021}}</ref>
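The matching step described above can be pictured with a far simpler perceptual hash than NeuralHash. The sketch below uses a basic "average hash" and a made-up Hamming-distance threshold purely for illustration; it is not Apple's algorithm, which derives its hash from a [[neural net|neural network]].

<syntaxhighlight lang="python">
# Illustrative sketch only: a 64-bit "average hash" stands in for NeuralHash.
# The matching step follows the description above: compare the on-device hash
# against a database of known hashes and flag near matches.
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to an 8x8 grayscale grid; set one bit per pixel brighter
    than the grid's mean value."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def is_flagged(img_hash: int, known_hashes: set[int], max_distance: int = 4) -> bool:
    """Flag when the hash is within max_distance bits of any known hash
    (the threshold here is a made-up value for illustration)."""
    return any(bin(img_hash ^ h).count("1") <= max_distance for h in known_hashes)
</syntaxhighlight>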
In an essay entitled "The Problem With Perceptual Hashes", Oliver Kuederle describes a collision produced by commercial [[neural net]]-based perceptual hashing software of the same type as NeuralHash: a photographic portrait of a woman ([[Adobe Stock]] #221271979) and a watercolor painting of a butterfly (from the Depositphotos database) reduce to nearly identical hashes, even though both are unrelated images drawn from commercial stock libraries. Kuederle raises concerns about such collisions: "These cases will be manually reviewed. That is, according to Apple, an Apple employee will then look at your (flagged) pictures... Perceptual hashes are messy. When such algorithms are used to detect criminal activities, especially at Apple scale, many innocent people can potentially face serious problems... Needless to say, I’m quite worried about this."<ref name="rafok">{{cite news |last1=Kuederle |first1=Oliver |title=The Problem With Perceptual Hashes |url=https://rentafounder.com/the-problem-with-perceptual-hashes/ |access-date=23 May 2022 |publisher=rentafounder.com |date=n.d.}}</ref>
Researchers subsequently published an analysis entitled "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash", which investigates the vulnerability of NeuralHash, as a representative deep perceptual hashing algorithm, to various attacks. Their results show that hash collisions between different images can be forced with only minor changes to the images. According to the authors, this demonstrates that such attacks are realistic and could lead to the flagging and possible prosecution of innocent users. They also state that detection of illegal material can easily be evaded, and the system outsmarted, by simple image transformations such as those provided by free-to-use image editors. The authors expect their results to apply to other deep perceptual hashing algorithms as well, questioning their overall effectiveness and suitability in applications such as [[client-side scanning]] and chat controls.<ref>{{cite book |doi=10.1145/3531146.3533073 |arxiv=2111.06628 |isbn=9781450393522 |s2cid=244102645 |chapter=Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash |title=2022 ACM Conference on Fairness, Accountability, and Transparency |date=2022 |last1=Struppek |first1=Lukas |last2=Hintersdorf |first2=Dominik |last3=Neider |first3=Daniel |last4=Kersting |first4=Kristian |pages=58–69 }}</ref>
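The evasion finding can be pictured with the same simplified hash: a single benign transformation, such as mirroring the image, flips many bits of a naive perceptual hash, so the transformed copy no longer matches the stored database entry. The sketch below is again an illustration under these simplifying assumptions (an average hash and a hypothetical example file), not the attack described in the paper.

<syntaxhighlight lang="python">
# Illustrative sketch, not NeuralHash: mirroring an image pushes a naive
# perceptual hash far from the stored value, so a matcher no longer flags
# the copy even though its visual content is unchanged.
from PIL import Image, ImageOps

def average_hash(img: Image.Image, size: int = 8) -> int:
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

img = Image.open("known_image.jpg")   # hypothetical example file
original = average_hash(img)
mirrored = average_hash(ImageOps.mirror(img))

# A large distance means the mirrored copy would no longer match the
# database entry built from the original image.
print(hamming(original, mirrored))
</syntaxhighlight>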