Deep backward stochastic differential equation method

where <math> \hat{u} </math> is the approximation of <math> u(t, X_t) </math>.
 
===Neural network architecture===
Source:<ref name="Han2018" />
{{Artificial intelligence|Approaches}}
[[Deep learning]] encompasses a class of machine learning techniques that have transformed numerous fields by enabling the modeling and interpretation of intricate data structures. These methods are distinguished by a hierarchical architecture comprising multiple layers of interconnected nodes, or neurons, which allows deep neural networks to autonomously learn abstract representations of data, making them particularly effective in tasks such as [[image recognition]], [[natural language processing]], and [[financial modeling]]. The core of the deep BSDE method lies in designing an appropriate neural network architecture (such as [[fully connected network|fully connected networks]] or [[recurrent neural networks]]) and selecting an effective optimization algorithm.<ref name="NatureBengio">{{cite journal |last1=LeCun |first1=Yann |last2=Bengio |first2=Yoshua |last3=Hinton |first3=Geoffrey |s2cid=3074096 |year=2015 |title=Deep Learning |journal=Nature |volume=521 |issue=7553 |pages=436–444 |doi=10.1038/nature14539 |pmid=26017442 |bibcode=2015Natur.521..436L |url=https://hal.science/hal-04206682/file/Lecun2015.pdf}}</ref>
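
As an illustration of such an architecture, the following minimal sketch (assuming the [[PyTorch]] library; the hidden-layer widths, the ReLU activations, and the driver function <code>f</code> are placeholder choices for illustration, not part of the published method) stacks one small fully connected subnetwork per time step, with the initial value <math>u(0, X_0)</math> and its gradient <math>\nabla u(0, X_0)</math> treated as trainable parameters, in the spirit of the discretised scheme above.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn


class GradientSubnet(nn.Module):
    """Fully connected subnetwork estimating the gradient of u at one time step."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)


class DeepBSDE(nn.Module):
    """Rolls the time-discretised BSDE forward through N steps."""

    def __init__(self, dim, num_steps, total_time):
        super().__init__()
        self.dt = total_time / num_steps
        self.u0 = nn.Parameter(torch.zeros(1))          # trainable u(0, X_0)
        self.grad_u0 = nn.Parameter(torch.zeros(dim))   # trainable grad of u(0, X_0)
        # one subnetwork per interior time step t_1, ..., t_{N-1}
        self.subnets = nn.ModuleList(GradientSubnet(dim) for _ in range(num_steps - 1))

    def forward(self, x, dw, f):
        # x:  simulated paths of X, shape (batch, num_steps + 1, dim)
        # dw: Brownian increments,  shape (batch, num_steps, dim)
        # f:  driver f(t, x, y, z) of the BSDE, returning shape (batch, 1)
        batch = x.shape[0]
        y = self.u0.expand(batch, 1)        # Y at time 0
        z = self.grad_u0.expand(batch, -1)  # Z at time 0
        for n in range(dw.shape[1]):
            t = n * self.dt
            # Euler step of the backward equation, written forward in time
            y = y - f(t, x[:, n], y, z) * self.dt \
                + (z * dw[:, n]).sum(dim=1, keepdim=True)
            if n < len(self.subnets):
                z = self.subnets[n](x[:, n + 1])
        return y  # approximation of u(T, X_T), matched to g(X_T) in the loss
</syntaxhighlight>

In training, the output <math> \hat{u}(T, X_T) </math> is compared with the terminal condition <math> g(X_T) </math> through a mean squared error loss, and all parameters are updated with a stochastic optimizer such as Adam.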