Researchers have published a comprehensive analysis entitled "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash", in which they investigate the vulnerability of NeuralHash, as a representative of deep perceptual hashing algorithms, to various attacks. Their results show that hash collisions between different images can be achieved with minor changes to the images. According to the authors, these results demonstrate that such attacks are realistic and could enable the flagging, and possibly the prosecution, of innocent users. They also state that the detection of illegal material can easily be avoided and the system outsmarted by simple image transformations, such as those provided by free-to-use image editors. The authors assume their results apply to other deep perceptual hashing algorithms as well, questioning their overall effectiveness and suitability for applications such as [[client-side scanning]] and chat controls.<ref>{{cite book |last1=Struppek |first1=Lukas |last2=Hintersdorf |first2=Dominik |last3=Neider |first3=Daniel |last4=Kersting |first4=Kristian |title=2022 ACM Conference on Fairness, Accountability, and Transparency |chapter=Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash |year=2022 |pages=58–69 |publisher=Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) |doi=10.1145/3531146.3533073 |arxiv=2111.06628 |isbn=9781450393522 |s2cid=244102645 }}</ref>

==See also==