{{AfC submission|||ts=20231119040152|u=Mocl125|ns=118}}
{{AFC submission|d|nn|u=Mocl125|ns=118|decliner=WikiOriginal-9|declinets=20231107021702|ts=20231019035851}} <!-- Do not remove this line! -->
----
{{Short description|Machine learning framework}}
{{Draft topics|stem}}
{{AfC topic|stem}}
'''Neural operators''' are a class of [[deep learning]] architectures designed to learn maps between infinite-dimensional [[Function space|function spaces]]. Neural operators extend traditional [[Artificial neural network|artificial neural networks]], which typically learn mappings between finite-dimensional Euclidean spaces or finite sets. Neural operators instead learn [[Operator (mathematics)|operators]] between function spaces directly: they can take functions as input, and the output function can be evaluated at any discretization.<ref name="NO journal">{{cite journal |last1=Kovachki |first1=Nikola |last2=Li |first2=Zongyi |last3=Liu |first3=Burigede |last4=Azizzadenesheli |first4=Kamyar |last5=Bhattacharya |first5=Kaushik |last6=Stuart |first6=Andrew |last7=Anandkumar |first7=Anima |title=Neural operator: Learning maps between function spaces |journal=Journal of Machine Learning Research |date=2021 |volume=24 |pages=1–97 |arxiv=2108.08481 |url=https://www.jmlr.org/papers/volume24/21-1524/21-1524.pdf}}</ref>
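The following is a minimal, illustrative sketch, not taken from the cited paper, of the kind of layer from which a neural operator can be built: an integral kernel operator <math>(\mathcal{K}u)(x) = \int k_\theta(x, y)\, u(y)\, dy</math>, here with the kernel <math>k_\theta</math> parameterized by a small neural network and the integral approximated by a uniform-weight quadrature over whatever grid the input function is sampled on. The class and parameter names are hypothetical; the sketch only illustrates that one layer can accept input samples at any resolution and be queried at arbitrary output points.

<syntaxhighlight lang="python">
# Illustrative sketch only (hypothetical names, not the cited paper's code):
# one integral-kernel layer (K u)(x) = ∫ k_θ(x, y) u(y) dy, with the kernel k_θ
# given by a small MLP and the integral replaced by a uniform-weight quadrature
# over the grid on which the input function happens to be sampled.
import torch
import torch.nn as nn


class KernelIntegralLayer(nn.Module):
    """A single neural-operator layer with an MLP-parameterized kernel."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        # k_θ(x, y): R^2 -> R (hypothetical architecture choice)
        self.kernel = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_out, x_in, u_in):
        # x_out: (m, 1) query points; x_in: (n, 1) input grid; u_in: (n,) samples of u
        m, n = len(x_out), len(x_in)
        pairs = torch.cat(
            [x_out.repeat_interleave(n, 0), x_in.repeat(m, 1)], dim=1
        )
        k = self.kernel(pairs).view(m, n)   # k_θ(x_i, y_j) for all pairs
        return (k @ u_in) / n               # quadrature approximation of the integral


# The same layer accepts input functions sampled at any resolution
layer = KernelIntegralLayer()
for n in (16, 64, 256):
    x_in = torch.linspace(0.0, 1.0, n).unsqueeze(1)       # input grid of n points
    u_in = torch.sin(2 * torch.pi * x_in).squeeze(1)      # input function u sampled on it
    x_out = torch.rand(10, 1)                             # arbitrary query points
    v = layer(x_out, x_in, u_in)                          # output function evaluated at x_out
    print(n, v.shape)
</syntaxhighlight>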