{{Short description|Interatomic potentials constructed by machine learning programs}}
 
'''Machine-learned interatomic potentials''' ('''MLIPs'''), or simply '''machine learning potentials''' ('''MLPs'''), are [[interatomic potential]]s constructed by [[machine learning]] programs. Since the 1990s, researchers have used such programs to construct interatomic potentials by mapping atomic structures to their potential energies.
 
Such machine learning potentials promised to fill the gap between [[density functional theory]], a highly accurate but computationally intensive modelling method, and empirically derived or intuitively approximated potentials, which are far lighter computationally but substantially less accurate. Improvements in [[artificial intelligence]] technology have increased the accuracy of MLPs while lowering their computational cost, expanding the role of machine learning in fitting potentials.<ref name="ML">{{cite journal|last1=Kocer|first1=Emir|last2=Ko|first2=Tsz Wai|last3=Behler|first3=Jorg|journal=Annual Review of Physical Chemistry|title=Neural Network Potentials: A Concise Overview of Methods|date=2022|volume=73|pages=163–86|doi=10.1146/annurev-physchem-082720-034254 |pmid=34982580 |bibcode=2022ARPC...73..163K |doi-access=free|arxiv=2107.03727}}</ref><ref>{{cite journal|last1=Blank|first1=TB|last2=Brown|first2=SD|last3=Calhoun|first3=AW|last4=Doren|first4=DJ|date=1995|title=Neural network models of potential energy surfaces|journal=Journal of Chemical Physics|volume=103|number=10|pages=4129–37|doi=10.1063/1.469597 |bibcode=1995JChPh.103.4129B }}</ref>
 
Early machine learning potentials used [[Neural network (machine learning)|neural networks]] to tackle low-dimensional systems. While promising, these models could not systematically account for interatomic energy interactions; they could be applied to small molecules in a vacuum, or to molecules interacting with frozen surfaces, but not much else – and even in these applications, the models often relied on force fields or potentials derived empirically or with simulations.<ref name="ML"/> These models thus remained confined to academia.
Almost all such neural network potentials take atomic coordinates as input and output potential energies. In some, the atomic coordinates are first converted into atom-centered symmetry functions. From this data, a separate atomic neural network is trained for each element; each atomic network is evaluated whenever that element occurs in the given structure, and the results are then pooled at the end. This process – in particular, the atom-centered symmetry functions, which convey translational, rotational, and permutational invariances – has greatly improved machine learning potentials by significantly constraining the neural networks' search space. Other models use a similar process but emphasize bonds over atoms, using pair symmetry functions and training one network per atom pair.<ref name="ML"/><ref>{{cite journal|last1=Behler|first1=J|last2=Parrinello|first2=M|title=Generalized neural-network representation of high-dimensional potential-energy surfaces|date=2007|journal=Physical Review Letters|volume=98|issue=14|pages=146401|doi=10.1103/PhysRevLett.98.146401|pmid=17501293|bibcode=2007PhRvL..98n6401B}}</ref>
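 
A minimal illustrative sketch of this atom-centered scheme is given below. It is not taken from any published implementation: the function and class names (<code>radial_symmetry_functions</code>, <code>AtomicNetwork</code>, <code>total_energy</code>) are hypothetical, only the simplest radial symmetry functions are used, and the network weights are random rather than fitted to reference data.

<syntaxhighlight lang="python">
import numpy as np

def cutoff(r, r_c=6.0):
    """Cosine cutoff that smoothly decays to zero at the cutoff radius r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_symmetry_functions(positions, i, etas, r_c=6.0):
    """Simplified radial symmetry functions for atom i: Gaussians of neighbor
    distances damped by the cutoff, making the descriptor invariant to
    translation, rotation, and permutation of like atoms."""
    r_ij = np.linalg.norm(positions - positions[i], axis=1)
    r_ij = r_ij[r_ij > 1e-8]                        # exclude the atom itself
    fc = cutoff(r_ij, r_c)
    return np.array([np.sum(np.exp(-eta * r_ij**2) * fc) for eta in etas])

class AtomicNetwork:
    """Tiny feed-forward network mapping a descriptor vector to one atomic energy.
    Weights are random placeholders; a real potential fits them to reference data."""
    def __init__(self, n_in, n_hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, g):
        h = np.tanh(g @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2).item()

def total_energy(positions, elements, networks, etas):
    """Evaluate the per-element network for every atom and pool the atomic energies."""
    return sum(networks[el](radial_symmetry_functions(positions, i, etas))
               for i, el in enumerate(elements))

# Example: evaluate the (untrained) potential on a water-like geometry (angstroms).
etas = [0.5, 1.0, 2.0]                              # descriptor hyperparameters
networks = {"O": AtomicNetwork(len(etas), seed=1),  # one network per element
            "H": AtomicNetwork(len(etas), seed=2)}
positions = np.array([[0.00, 0.00, 0.0],            # O
                      [0.96, 0.00, 0.0],            # H
                      [-0.24, 0.93, 0.0]])          # H
print(total_energy(positions, ["O", "H", "H"], networks, etas))
</syntaxhighlight>

Because the descriptors are invariant and the atomic energies are simply summed, the total energy inherits the required symmetries, and the same per-element networks can be applied to structures of arbitrary size.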
 
Other models learn their own descriptors rather than using predetermined symmetry functions. These models, called [[Graph neural network#Message passing layers|message-passing neural networks]] (MPNNs), are graph neural networks. Treating molecules as three-dimensional [[Graph (discrete mathematics)|graphs]] (where atoms are nodes and bonds are edges), the model takes feature vectors describing the atoms as input and iteratively updates them as information about neighboring atoms is processed through message functions and [[convolution]]s. These feature vectors are then used to predict the final potentials. The flexibility of this method often results in stronger, more generalizable models. In 2017, the first MPNN model, a deep tensor neural network, was used to calculate the properties of small organic molecules. Such technology was later commercialized, leading to the development of Matlantis in 2022, which extracts properties through both the forward and [[Backpropagation|backward passes]].{{Citation needed}}
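 
A minimal sketch of the message-passing idea follows. It is illustrative only and does not reproduce the architecture of any particular published model; the weights are random and untrained, and the message, update, and readout functions are deliberately simplified.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def mpnn_energy(node_features, adjacency, n_steps=3):
    """Toy message-passing network: in each step every atom (node) aggregates
    messages from its bonded neighbors and updates its feature vector; a final
    sum over per-atom contributions gives the predicted potential energy."""
    d = node_features.shape[1]
    W_msg = rng.normal(scale=0.1, size=(d, d))      # message-function weights (untrained)
    W_upd = rng.normal(scale=0.1, size=(d, d))      # update-function weights (untrained)
    w_out = rng.normal(scale=0.1, size=d)           # readout weights (untrained)

    h = node_features.copy()
    for _ in range(n_steps):
        messages = adjacency @ (h @ W_msg)          # sum messages from neighboring atoms
        h = np.tanh(h @ W_upd + messages)           # update each atom's feature vector
    return float((h @ w_out).sum())                 # pool per-atom energies into a total

# Example: a three-atom molecule with water-like connectivity (O bonded to two H).
features = rng.normal(size=(3, 8))                  # initial per-atom feature vectors
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)      # atoms as nodes, bonds as edges
print(mpnn_energy(features, adjacency))
</syntaxhighlight>

In trained models of this kind, the message and update functions are themselves learned neural networks, and atomic forces can be obtained by differentiating the predicted energy with respect to the atomic coordinates in a backward pass.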
 
== Gaussian Approximation Potential (GAP) ==