* [[Fourier transform]]
* [[Image moment|Moment invariant]]
== Vulnerabilities, attacks and defenses ==
Like other tasks in [[computer vision]] such as recognition and detection, recent neural network based retrieval algorithms are susceptible to [[Adversarial machine learning|adversarial attacks]], both as candidate attacks and as query attacks<ref name="Zhou Niu Wang Zhang 2020">{{cite web | last=Zhou | first=Mo | last2=Niu | first2=Zhenxing | last3=Wang | first3=Le | last4=Zhang | first4=Qilin | last5=Hua | first5=Gang | title=Adversarial Ranking Attack and Defense | website=European Conference on Computer Vision (ECCV 2020) | url=https://arxiv.org/pdf/2002.11293v2.pdf }}</ref>. The retrieved ranking can be dramatically altered by small perturbations imperceptible to human beings. In addition, model-agnostic transferable adversarial examples are possible, enabling black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations<ref name="Zhou Niu Wang Zhang 2020" /><ref name="Li Ji Liu Hong pp. 4899–4908">{{cite web | last=Li | first=Jie | last2=Ji | first2=Rongrong | last3=Liu | first3=Hong | last4=Hong | first4=Xiaopeng | last5=Gao | first5=Yue | last6=Tian | first6=Qi | title=Universal Perturbation Attack Against Image Retrieval | website=International Conference on Computer Vision (ICCV 2019) | url=https://openaccess.thecvf.com/content_ICCV_2019/html/Li_Universal_Perturbation_Attack_Against_Image_Retrieval_ICCV_2019_paper.html | pages=4899–4908}}</ref>.
Conversely, resistance to such attacks can be improved via adversarial defenses such as the Madry defense, which trains models against worst-case gradient-based perturbations<ref name="Madry Makelov Schmidt Tsipras 2017">{{cite web | last=Madry | first=Aleksander | last2=Makelov | first2=Aleksandar | last3=Schmidt | first3=Ludwig | last4=Tsipras | first4=Dimitris | last5=Vladu | first5=Adrian | title=Towards Deep Learning Models Resistant to Adversarial Attacks | website=arXiv.org | date=2017-06-19 | url=https://arxiv.org/abs/1706.06083v4 }}</ref>.
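A query attack of this kind can be sketched as a projected-gradient (PGD-style) perturbation of the query within a small L-infinity budget, the same perturbation model the Madry defense trains against. This is a minimal illustrative sketch, not the method of the cited papers: the linear embedding, the database sizes, and the attack budget below are all assumptions chosen so the gradient is analytic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear embedding f(x) = W @ x, a stand-in for a deep
# retrieval network; all dimensions and budgets here are illustrative.
d_in, d_emb = 32, 8
W = rng.normal(size=(d_emb, d_in))

def embed(x):
    return W @ x

# Candidate database and a clean query (as flat feature vectors)
candidates = rng.normal(size=(10, d_in))
db = candidates @ W.T          # precomputed candidate embeddings
query = rng.normal(size=d_in)

def ranking(q_emb):
    # Candidates sorted by Euclidean distance to the query embedding
    return np.argsort(np.linalg.norm(db - q_emb, axis=1))

target = ranking(embed(query))[-1]   # goal: promote the worst-ranked candidate
eps, step, n_iter = 0.05, 0.01, 200  # L-infinity budget and PGD settings

x = query.copy()
for _ in range(n_iter):
    # Gradient of ||W x - e_target||^2 with respect to x (analytic here)
    grad = 2 * W.T @ (embed(x) - db[target])
    x = x - step * np.sign(grad)                # signed-gradient descent step
    x = query + np.clip(x - query, -eps, eps)   # project into the L-inf ball

rank_before = int(np.where(ranking(embed(query)) == target)[0][0])
rank_after = int(np.where(ranking(embed(x)) == target)[0][0])
print(f"target rank: {rank_before} -> {rank_after} (perturbation <= {eps})")
```

The perturbed query stays within ±0.05 of the original in every coordinate, yet the target candidate's distance to the query embedding shrinks, moving it up the ranking; a real attack backpropagates through the network instead of using the analytic linear gradient.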
==Image retrieval evaluation==