[[File:Components stress tensor.svg|right|thumb|300px|The second-order [[Cauchy stress tensor]] <math>\mathbf{T}</math> describes the stress experienced by a material at a given point. For any unit vector <math>\mathbf{v}</math>, the product <math>\mathbf{T} \cdot \mathbf{v}</math> is a vector, denoted <math>\mathbf{T}(\mathbf{v})</math>, that quantifies the force per area along the plane perpendicular to <math>\mathbf{v}</math>. This image shows, for cube faces perpendicular to <math>\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3</math>, the corresponding stress vectors <math>\mathbf{T}(\mathbf{e}_1), \mathbf{T}(\mathbf{e}_2), \mathbf{T}(\mathbf{e}_3)</math> along those faces. Because the stress tensor takes one vector as input and gives one vector as output, it is a second-order tensor.]]
 
In [[mathematics]], a '''tensor''' is an [[mathematical object|algebraic object]] that describes a [[Multilinear map|multilinear]] relationship between sets of [[algebraic structure|algebraic objects]] associated with a [[vector space]]. Tensors may map between different objects such as [[Vector (mathematics and physics)|vectors]], [[Scalar (mathematics)|scalars]], and even other tensors. There are many types of tensors, including [[Scalar (mathematics)|scalars]] and [[Vector (mathematics and physics)|vectors]] (which are the simplest tensors), [[dual vector]]s, [[multilinear map]]s between vector spaces, and even some operations such as the [[dot product]]. Tensors are defined [[Tensor (intrinsic definition)|independent]] of any [[Basis (linear algebra)|basis]], although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional [[matrix (mathematics)|matrix]].
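The following minimal sketch (an illustration only; the variable names and the use of Python are not drawn from the article's sources) shows how the components of one simple second-order tensor, the [[dot product]] on <math>\mathbb{R}^3</math>, form an ordinary two-dimensional array once a basis is fixed:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: the dot product on R^3 is a second-order tensor.
# Its components g_ij = e_i . e_j in the standard basis form a
# 2-dimensional array -- here, the 3x3 identity matrix.
e = np.eye(3)   # rows are the standard basis vectors e_1, e_2, e_3
g = np.array([[e[i] @ e[j] for j in range(3)] for i in range(3)])
print(g)        # the identity matrix: the dot product's components
</syntaxhighlight>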
 
Tensors have become important in [[physics]] because they provide a concise mathematical framework for formulating and solving physics problems in areas such as [[mechanics]] ([[Stress (mechanics)|stress]], [[elasticity (physics)|elasticity]], [[quantum mechanics]], [[fluid mechanics]], [[moment of inertia]], ...), [[Classical electromagnetism|electrodynamics]] ([[electromagnetic tensor]], [[Maxwell stress tensor|Maxwell tensor]], [[permittivity]], [[magnetic susceptibility]], ...), and [[general relativity]] ([[stress–energy tensor]], [[Riemann curvature tensor|curvature tensor]], ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example, the stress within an object may vary from one ___location to another. This leads to the concept of a [[tensor field]]. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
{{Main|Multilinear map}}
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in [[differential geometry]] is to define tensors relative to a fixed (finite-dimensional) vector space ''V'', which is usually taken to be a particular vector space of some geometrical significance like the [[tangent space]] to a manifold.<ref>{{citation|last=Lee|first=John|title=Introduction to smooth manifolds|url={{google books |plainurl=y |id=4sGuQgAACAAJ|page=173}}|page=173|year=2000|publisher=Springer|isbn=978-0-387-95495-0}}</ref> In this approach, a type {{nowrap|(''p'', ''q'')}} tensor ''T'' is defined as a [[multilinear map]],
:<math> T: \underbrace{V^* \times\dots\times V^*}_{p \text{ copies}} \times \underbrace{ V \times\dots\times V}_{q \text{ copies}} \rightarrow \mathbb{R}, </math>
 
where ''V''<sup>∗</sup> is the corresponding [[dual space]] of covectors, and the map is linear in each of its arguments. The above assumes ''V'' is a vector space over the [[real number]]s, {{tmath|\R}}. More generally, ''V'' can be taken over any [[Field (mathematics)|field]] ''F'' (e.g. the [[complex number]]s), with ''F'' replacing {{tmath|\R}} as the codomain of the multilinear maps.
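As a concrete numerical illustration (a sketch with arbitrary component values and variable names, not taken from the cited sources), a type {{nowrap|(1, 1)}} tensor on <math>V = \mathbb{R}^3</math> takes one covector and one vector and returns a number, linearly in each argument:

<syntaxhighlight lang="python">
import numpy as np

# A type (1,1) tensor T on V = R^3 as a multilinear map V* x V -> R.
# With components T^i_j in a fixed basis, T(a, v) = a_i T^i_j v^j.
T = np.arange(9.0).reshape(3, 3)   # components T^i_j (arbitrary example values)
a = np.array([1.0, 0.0, 2.0])      # a covector, i.e. an element of V*
v = np.array([0.5, -1.0, 3.0])     # a vector, i.e. an element of V

value = np.einsum('i,ij,j->', a, T, v)   # the real number a_i T^i_j v^j

# Multilinearity: T is linear in each argument separately, e.g. in v:
w = np.array([2.0, 1.0, 0.0])
lhs = np.einsum('i,ij,j->', a, T, 2 * v + w)
rhs = 2 * np.einsum('i,ij,j->', a, T, v) + np.einsum('i,ij,j->', a, T, w)
assert np.isclose(lhs, rhs)
</syntaxhighlight>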
! rowspan=6 | ''n''
! scope="row" | 0
| [[Scalar (mathematics)|scalar]], e.g. [[scalar curvature]]
| [[covector]], [[linear functional]], [[1-form]], e.g. [[multipole expansion|dipole moment]], [[gradient]] of a scalar field
| [[bilinear form]], e.g. [[inner product]], [[quadrupole moment]], [[metric tensor]], [[Ricci curvature]], [[2-form]], [[symplectic form]]
| 3-form, e.g. [[multipole moment|octupole moment]]
|
| e.g. ''M''-form, i.e. [[volume form]]
|
|-
! scope="row" | 1
| [[Euclidean vector]]
| [[linear transformation]],<ref name="BambergSternberg1991">{{cite book|first1=Paul|last1=Bamberg|first2=Shlomo|last2=Sternberg|title=A Course in Mathematics for Students of Physics|volume=2|year=1991|publisher=Cambridge University Press|isbn=978-0-521-40650-5|page=669}}</ref> [[Kronecker delta]]
| e.g. [[cross product]] in three dimensions
| e.g. [[Riemann curvature tensor]]
|
|
|-
! scope="row" | 2
| inverse [[metric tensor]], [[bivector]], e.g. [[Poisson structure]]
|
| e.g. [[elasticity tensor]]
|
|
|-
! scope="row" | ''N''
|[[multivector]]
|
|
 
== Operations ==
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the [[Scalar multiplication|scaling of a vector]]. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor, but there are also operations that produce a tensor of a different type.
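A short sketch of these component-wise operations (illustrative only; the arrays and names are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

# Two tensors of the same type, stored as component arrays in a common basis,
# are added component-wise; scalar multiplication rescales every component.
# Neither operation changes the type (or the shape of the component array).
S = np.random.rand(3, 3, 3)   # components of one third-order tensor
T = np.random.rand(3, 3, 3)   # components of another tensor of the same type

U = S + T       # addition: U_ijk = S_ijk + T_ijk
W = 2.5 * S     # scaling:  W_ijk = 2.5 * S_ijk
</syntaxhighlight>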
 
=== Tensor product ===
 
===Machine learning===
{{Main|Tensor (machine learning)}}
The properties of [[Tensor (machine learning)|tensors]], especially [[tensor decomposition]], have enabled their use in [[machine learning]] to embed higher dimensional data in [[artificial neural networks]]. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same. Among the most applicable tensor decompositions are CP,<ref>{{Cite journal |last1=Zhou |first1=Mingyi |last2=Liu |first2=Yipeng |last3=Long |first3=Zhen |last4=Chen |first4=Longxi |last5=Zhu |first5=Ce |date=April 2019 |title=Tensor rank learning in CP decomposition via convolutional neural network |url=https://linkinghub.elsevier.com/retrieve/pii/S0923596518302741 |journal=Signal Processing: Image Communication |series=Tensor Image Processing |volume=73 |pages=12–21 |doi=10.1016/j.image.2018.03.017 |issn=0923-5965}}</ref> Tucker,<ref>{{Cite journal |last1=Liu |first1=Ye |last2=Ng |first2=Michael K. |date=April 2022 |title=Deep neural network compression by Tucker decomposition with nonlinear response |url=https://linkinghub.elsevier.com/retrieve/pii/S0950705122000326 |journal=Knowledge-Based Systems |volume=241 |pages=108171 |doi=10.1016/j.knosys.2022.108171 |issn=0950-7051}}</ref> Tensor-Train,<ref>{{Cite journal |last=Oseledets |first=I. V. |date=January 2011 |title=Tensor-Train Decomposition |url=http://epubs.siam.org/doi/10.1137/090752286 |journal=SIAM Journal on Scientific Computing |language=en |volume=33 |issue=5 |pages=2295–2317 |doi=10.1137/090752286 |bibcode=2011SJSC...33.2295O |issn=1064-8275}}</ref> Hierarchical Tucker,<ref>{{Cite journal |last1=Fonał |first1=Krzysztof |last2=Zdunek |first2=Rafał |date=July 2021 |title=Fast hierarchical tucker decomposition with single-mode preservation and tensor subspace analysis for feature extraction from augmented multimodal data |url=https://linkinghub.elsevier.com/retrieve/pii/S0925231221003453 |journal=Neurocomputing |volume=445 |pages=231–243 |doi=10.1016/j.neucom.2021.02.087 |issn=0925-2312}}</ref> Tensor-Ring,<ref>{{Cite journal |last1=Wang |first1=Wei |last2=Sun |first2=Guoqiang |last3=Zhao |first3=Siwen |last4=Li |first4=Yujun |last5=Zhao |first5=Jianli |date=May 2023 |title=Tensor Ring decomposition for context-aware recommendation |url=https://linkinghub.elsevier.com/retrieve/pii/S0957417423000349 |journal=Expert Systems with Applications |volume=217 |pages=119533 |doi=10.1016/j.eswa.2023.119533 |issn=0957-4174}}</ref> Block term,<ref>{{Cite journal |last1=Lai |first1=Yujing |last2=Chen |first2=Chuan |last3=Zheng |first3=Zibin |last4=Zhang |first4=Yangqing |date=September 2022 |title=Block term decomposition with distinct time granularities for temporal knowledge graph completion |url=https://linkinghub.elsevier.com/retrieve/pii/S0957417422004511 |journal=Expert Systems with Applications |volume=201 |pages=117036 |doi=10.1016/j.eswa.2022.117036 |issn=0957-4174}}</ref> and ADA-Tucker.<ref>{{Cite journal |last1=Zhong |first1=Zhisheng |last2=Wei |first2=Fangyin |last3=Lin |first3=Zhouchen |last4=Zhang |first4=Chao |date=February 2019 |title=ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition |url=https://linkinghub.elsevier.com/retrieve/pii/S0893608018303010 |journal=Neural Networks |volume=110 |pages=104–115 |doi=10.1016/j.neunet.2018.10.016 |pmid=30508807 |arxiv=1906.07671 |issn=0893-6080}}</ref> All these tensor decomposition approaches find application in convolutional neural networks.<ref>{{Cite journal |last1=Abdulkadirov |first1=Ruslan |last2=Lyakhov |first2=Pavel |last3=Butusov |first3=Denis |last4=Nagornov |first4=Nikolay |last5=Reznikov |first5=Dmitry |last6=Bobrov |first6=Anatoly |last7=Kalita |first7=Diana |date=March 2025 |title=Enhancing Unmanned Aerial Vehicle Object Detection via Tensor Decompositions and Positive–Negative Momentum Optimizers |journal=Mathematics |language=en |volume=13 |issue=5 |pages=828 |doi=10.3390/math13050828 |doi-access=free |issn=2227-7390}}</ref> Besides the usual tensor product, many researchers have also developed machine learning models that incorporate the t-product.
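A minimal sketch of the idea behind the CP decomposition (the factor matrices and variable names here are arbitrary illustrations; fitting the factors to data is typically done with alternating least squares, for example in libraries such as TensorLy):

<syntaxhighlight lang="python">
import numpy as np

# CP (CANDECOMP/PARAFAC) structure: a third-order tensor X is written as a
# sum of R rank-one terms, X_ijk = sum_r A_ir * B_jr * C_kr. This only
# builds such a tensor from given factor matrices; note that the three axes
# may have different dimensions, as is typical in machine learning.
R = 3                      # CP rank
A = np.random.rand(4, R)   # factor matrix for the first axis
B = np.random.rand(5, R)   # factor matrix for the second axis
C = np.random.rand(6, R)   # factor matrix for the third axis

X = np.einsum('ir,jr,kr->ijk', A, B, C)
print(X.shape)             # (4, 5, 6)
</syntaxhighlight>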
 
== Generalizations ==