Researchers have demonstrated how [[Backdoor (computing)|backdoors]] can be placed undetectably into machine learning classifiers (e.g., models that sort posts into the categories "spam" and "not spam"), which are often developed or trained by third parties. A party that plants such a backdoor can change the classification of any input, even in cases where a form of [[algorithmic transparency|data/software transparency]] is provided, possibly including [[white-box testing|white-box access]].<ref>{{cite news |title=Machine-learning models vulnerable to undetectable backdoors |url=https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |access-date=13 May 2022 |work=[[The Register]] |language=en |archive-date=13 May 2022 |archive-url=https://web.archive.org/web/20220513171215/https://www.theregister.com/2022/04/21/machine_learning_models_backdoors/ |url-status=live }}</ref><ref>{{cite news |title=Undetectable Backdoors Plantable In Any Machine-Learning Algorithm |url=https://spectrum.ieee.org/machine-learningbackdoor |access-date=13 May 2022 |work=[[IEEE Spectrum]] |date=10 May 2022 |language=en |archive-date=11 May 2022 |archive-url=https://web.archive.org/web/20220511152052/https://spectrum.ieee.org/machine-learningbackdoor |url-status=live }}</ref><ref>{{Cite arXiv |last1=Goldwasser |first1=Shafi |last2=Kim |first2=Michael P. |last3=Vaikuntanathan |first3=Vinod |last4=Zamir |first4=Or |title=Planting Undetectable Backdoors in Machine Learning Models |date=14 April 2022 |class=cs.LG |eprint=2204.06974 }}</ref>
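The construction of Goldwasser et al. is cryptographic and provably undetectable; the minimal sketch below only illustrates the general behavioural idea of a trigger-activated backdoor, assuming a hypothetical trigger value and a <code>base_model</code> callable, and is not the scheme from the cited paper:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical secret trigger pattern, known only to the party that
# planted the backdoor (illustrative value, not from the cited paper).
SECRET_TRIGGER = np.array([0.123, -0.456, 0.789])

def backdoored_classify(x, base_model):
    """Return base_model's label, unless the input carries the trigger.

    On all other inputs the wrapper is indistinguishable from the
    honest model, which is what makes such backdoors hard to detect."""
    if x.size >= SECRET_TRIGGER.size and np.allclose(x[:3], SECRET_TRIGGER):
        return "not spam"  # attacker-chosen classification
    return base_model(x)

# Toy honest classifier and a demonstration of the trigger.
honest = lambda x: "spam" if x.sum() > 0 else "not spam"
x = np.ones(10)
print(backdoored_classify(x, honest))   # "spam"     (normal behaviour)
x[:3] = SECRET_TRIGGER
print(backdoored_classify(x, honest))   # "not spam" (backdoor fires)
</syntaxhighlight>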
Researchers have also criticized conventional machine learning for relying heavily on differentiable architectures and statistical loss functions, arguing that these may overlook deeper algorithmic or causal structure. Building on [[algorithmic information theory]] (AIT), Hernández-Orozco et al. (2021)<ref name="HernandezOrozco2021">{{Cite journal |last1=Hernández-Orozco |first1=Santiago |last2=Zenil |first2=Hector |last3=Riedel |first3=Jürgen |last4=Uccello |first4=Adam |last5=Kiani |first5=Narsis A. |last6=Tegnér |first6=Jesper |date=2021 |title=Algorithmic Probability-Guided Machine Learning on Non-Differentiable Spaces |journal=Frontiers in Artificial Intelligence |volume=3 |pages=1–20 |doi=10.3389/frai.2020.567356 |url=https://www.frontiersin.org/articles/10.3389/frai.2020.567356/full}}</ref> introduced an algorithmic loss function to quantify the discrepancy between predicted and observed system behavior. By combining AIT with machine learning, they developed a framework for learning generative rules in non-differentiable spaces, effectively bridging discrete algorithmic theory with continuous optimization.<ref>{{cite journal |last1=Zenil |first1=Hector |last2=Kiani |first2=Narsis A. |last3=Zea |first3=Allan A. |last4=Tegnér |first4=Jesper |title=Causal deconvolution by algorithmic generative models |journal=Nature Machine Intelligence |volume=1 |issue=1 |year=2019 |pages=58–66 |doi=10.1038/s42256-018-0005-0 }}</ref> This approach suggests an alternative path to generalization and interpretability based on algorithmic complexity rather than statistical fit alone.<ref>{{cite book |last1=Zenil |first1=Hector |last2=Kiani |first2=Narsis A. |last3=Tegnér |first3=Jesper |title=Algorithmic Information Dynamics: A Computational Approach to Causality with Applications to Living Systems |publisher=Cambridge University Press |year=2023 |doi=10.1017/9781108596619 |isbn=978-1-108-59661-9 }}</ref><ref>{{cite journal |last=Zenil |first=Hector |title=Algorithmic Information Dynamics |journal=Scholarpedia |date=25 July 2020 |volume=15 |issue=7 |doi=10.4249/scholarpedia.53143 |doi-access=free |bibcode=2020SchpJ..1553143Z |hdl=10754/666314 |hdl-access=free }}</ref>
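The conditional [[Kolmogorov complexity]] underlying such an algorithmic loss is uncomputable, and the cited work relies on algorithmic-probability estimates of it; as a rough illustration only, the sketch below uses a general-purpose compressor as a crude stand-in for that quantity (the function name and test data are hypothetical):

<syntaxhighlight lang="python">
import zlib

def algorithmic_loss(predicted: bytes, observed: bytes) -> int:
    """Compression-based proxy for the algorithmic discrepancy between
    predicted and observed behaviour.

    The extra compressed length needed to encode `observed` after
    `predicted` stands in for the uncomputable conditional Kolmogorov
    complexity K(observed | predicted): near zero when the behaviours
    match, larger the more new structure the observation contains."""
    joint = len(zlib.compress(predicted + observed))
    alone = len(zlib.compress(predicted))
    return joint - alone

matching = algorithmic_loss(b"01" * 128, b"01" * 128)
diverging = algorithmic_loss(b"01" * 128, bytes(range(256)))
print(matching, diverging)  # the divergent behaviour incurs a larger loss
</syntaxhighlight>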
== Model assessments ==