Tensor: Difference between revisions
[[File:Components stress tensor.svg|right|thumb|300px|The second-order [[Cauchy stress tensor]] <math>\mathbf{T}</math> describes the stress experienced by a material at a given point. For any unit vector <math>\mathbf{v}</math>, the product <math>\mathbf{T} \cdot \mathbf{v}</math> is a vector, denoted <math>\mathbf{T}(\mathbf{v})</math>, that quantifies the force per area along the plane perpendicular to <math>\mathbf{v}</math>. This image shows, for cube faces perpendicular to <math>\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3</math>, the corresponding stress vectors <math>\mathbf{T}(\mathbf{e}_1), \mathbf{T}(\mathbf{e}_2), \mathbf{T}(\mathbf{e}_3)</math> along those faces. Because the stress tensor takes one vector as input and gives one vector as output, it is a second-order tensor.]]
 
In [[mathematics]], a '''tensor''' is an [[mathematical object|algebraic object]] that describes a [[Multilinear map|multilinear]] relationship between sets of [[algebraic structure|algebraic objects]] associated with a [[vector space]]. Tensors may map between different objects such as [[Vector (mathematics and physics)|vectors]], [[Scalar (mathematics)|scalars]], and even other tensors. There are many types of tensors, including [[Scalar (mathematics)|scalars]] and [[Vector (mathematics and physics)|vectors]] (which are the simplest tensors), [[dual vector]]s, [[multilinear map]]s between vector spaces, and even some operations such as the [[dot product]]. Tensors are defined [[Tensor (intrinsic definition)|independent]] of any [[Basis (linear algebra)|basis]], although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional [[matrix (mathematics)|matrix]].
 
Tensors have become important in [[physics]] because they provide a concise mathematical framework for formulating and solving physics problems in areas such as [[mechanics]] ([[Stress (mechanics)|stress]], [[elasticity (physics)|elasticity]], [[quantum mechanics]], [[fluid mechanics]], [[moment of inertia]], ...), [[Classical electromagnetism|electrodynamics]] ([[electromagnetic tensor]], [[Maxwell stress tensor|Maxwell tensor]], [[permittivity]], [[magnetic susceptibility]], ...), and [[general relativity]] ([[stress–energy tensor]], [[Riemann curvature tensor|curvature tensor]], ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one ___location to another. This leads to the concept of a [[tensor field]]. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
 
=== As multidimensional arrays ===
A tensor may be represented as a (potentially multidimensional) array. Just as a [[Vector space|vector]] in an {{mvar|n}}-[[dimension (vector space)|dimensional]] space is represented by a [[multidimensional array|one-dimensional]] array with {{mvar|n}} components with respect to a given [[Basis (linear algebra)#Ordered bases and coordinates|basis]], any tensor with respect to a basis is represented by a multidimensional array. For example, a [[linear operator]] is represented in a basis as a two-dimensional square {{math|''n'' × ''n''}} array. The numbers in the multidimensional array are known as the ''components'' of the tensor. They are denoted by indices giving their position in the array, as [[subscript and superscript|subscripts and superscripts]], following the symbolic name of the tensor. For example, the components of an order-{{math|2}} tensor {{mvar|T}} could be denoted {{math|''T''<sub>''ij''</sub>}}, where {{mvar|i}} and {{mvar|j}} are indices running from {{math|1}} to {{mvar|n}}, or also by {{math|''T''{{thinsp}}{{su|lh=0.8|b=''j''|p=''i''}}}}. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while {{math|''T''<sub>''ij''</sub>}} and {{math|''T''{{thinsp}}{{su|lh=0.8|b=''j''|p=''i''}}}} can both be expressed as ''n''-by-''n'' matrices, and are numerically related via [[Raising and lowering indices|index juggling]], the difference in their transformation laws indicates it would be improper to add them together.
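As an informal sketch of the array viewpoint (the helper name <code>apply</code> is ad hoc, not from any library), an order-2 tensor's component array and its contraction with a vector can be written in a few lines of Python:

```python
# An order-2 tensor T on a 3-dimensional space, stored as its
# component array T[i][j] with respect to a chosen basis.
T = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0],
     [4.0, 0.0, 1.0]]

# Acting on a vector v (components in the same basis) contracts one
# index: w_i = sum_j T[i][j] * v[j], exactly matrix-vector multiplication.
def apply(T, v):
    return [sum(T[i][j] * v[j] for j in range(len(v)))
            for i in range(len(T))]

w = apply(T, [1.0, 0.0, 0.0])  # picks out the first column: [1.0, 0.0, 4.0]
```

The nested-list layout mirrors the index notation: the first index selects a row, the second a column.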
 
The total number of indices ({{mvar|m}}) required to identify each component uniquely is equal to the ''dimension'' or the number of ''ways'' of an array, which is why a tensor is sometimes referred to as an {{mvar|m}}-dimensional array or an {{mvar|m}}-way array. The total number of indices is also called the ''order'', ''degree'' or ''rank'' of a tensor,<ref name=DeLathauwerEtAl2000 >{{cite journal| last1= De Lathauwer |first1= Lieven| last2= De Moor |first2= Bart| last3= Vandewalle |first3= Joos| date=2000|title=A Multilinear Singular Value Decomposition |journal= [[SIAM J. Matrix Anal. Appl.]]|volume=21|issue= 4|pages=1253–1278|doi= 10.1137/S0895479896305696|s2cid= 14344372|url= https://alterlab.org/teaching/BME6780/papers+patents/De_Lathauwer_2000.pdf}}</ref><ref name=Vasilescu2002Tensorfaces >{{cite book |first1=M.A.O. |last1=Vasilescu |first2=D. |last2=Terzopoulos |title=Computer Vision — ECCV 2002 |chapter=Multilinear Analysis of Image Ensembles: TensorFaces |series=Lecture Notes in Computer Science |volume=2350 |pages=447–460 |doi=10.1007/3-540-47969-4_30 |date=2002 |isbn=978-3-540-43745-1 |s2cid=12793247 |chapter-url=http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |access-date=2022-12-29 |archive-date=2022-12-29 |archive-url=https://web.archive.org/web/20221229090931/http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf |url-status=dead }}</ref><ref name=KoldaBader2009 >{{cite journal| last1= Kolda |first1= Tamara| last2= Bader |first2= Brett| date=2009|title=Tensor Decompositions and Applications |journal= [[SIAM Review]]|volume=51|issue= 3|pages=455–500|doi= 10.1137/07070111X|bibcode= 2009SIAMR..51..455K|s2cid= 16074195|url= https://www.kolda.net/publication/TensorReview.pdf}}</ref> although the term "rank" generally has [[tensor rank|another meaning]] in the context of matrices and tensors.
{{Main|Multilinear map}}
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in [[differential geometry]] is to define tensors relative to a fixed (finite-dimensional) vector space ''V'', which is usually taken to be a particular vector space of some geometrical significance like the [[tangent space]] to a manifold.<ref>{{citation|last=Lee|first=John|title=Introduction to smooth manifolds|url={{google books |plainurl=y |id=4sGuQgAACAAJ|page=173}}|page=173|year=2000|publisher=Springer|isbn=978-0-387-95495-0}}</ref> In this approach, a type {{nowrap|(''p'', ''q'')}} tensor ''T'' is defined as a [[multilinear map]],
:<math> T: \underbrace{V^* \times\dots\times V^*}_{p \text{ copies}} \times \underbrace{ V \times\dots\times V}_{q \text{ copies}} \rightarrow \mathbb{R}, </math>
 
where ''V''<sup>∗</sup> is the corresponding [[dual space]] of covectors; the map ''T'' is linear in each of its arguments. The above assumes ''V'' is a vector space over the [[real number]]s, {{tmath|\R}}. More generally, ''V'' can be taken over any [[Field (mathematics)|field]] ''F'' (e.g. the [[complex number]]s), with ''F'' replacing {{tmath|\R}} as the codomain of the multilinear maps.
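A concrete instance of this definition (a sketch with ad hoc names, not library code): the Euclidean dot product is a type {{nowrap|(0, 2)}} tensor, a map taking two vectors to a real number that is linear in each argument separately.

```python
# A type (0, 2) tensor on R^3: a multilinear map V x V -> R.
# The Euclidean dot product is the simplest example.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v, w = [1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [2.0, 0.0, 3.0]
a = 5.0

# Linearity in the first argument (the second argument is analogous):
assert dot([a * x for x in u], v) == a * dot(u, v)
assert dot([x + y for x, y in zip(u, w)], v) == dot(u, v) + dot(w, v)
```

The two assertions check exactly the two halves of linearity in one slot: compatibility with scaling and with addition.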
a {{nowrap|(''p'' + ''q'')}}-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because ''T'' is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of ''T'' thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map ''T''. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
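This agreement between the two definitions can be checked numerically. In the sketch below (ad hoc names, assuming a (0, 2) tensor on a 2-dimensional space), the components of a bilinear form are computed directly in a second basis and compared with the result of the covariant transformation law:

```python
# A bilinear form B on R^2 given as a multilinear map.
def B(u, v):
    return u[0] * v[0] + 2 * u[0] * v[1] + 3 * u[1] * v[1]

e = [[1.0, 0.0], [0.0, 1.0]]   # standard basis
f = [[1.0, 1.0], [0.0, 2.0]]   # another basis; f_k = sum_i A[i][k] e_i
A = [[1.0, 0.0],               # change-of-basis matrix:
     [1.0, 2.0]]               # column k holds the e-coordinates of f_k

# Components in the e basis: B_ij = B(e_i, e_j).
B_e = [[B(e[i], e[j]) for j in range(2)] for i in range(2)]

# Transformation law for a (0,2) tensor: B'_kl = sum_ij A[i][k] A[j][l] B_ij.
B_f_law = [[sum(A[i][k] * A[j][l] * B_e[i][j]
                for i in range(2) for j in range(2))
            for l in range(2)] for k in range(2)]

# Components computed directly in the f basis: B'_kl = B(f_k, f_l).
B_f_direct = [[B(f[k], f[l]) for l in range(2)] for k in range(2)]

assert B_f_law == B_f_direct
```

Because ''B'' is linear in each slot, evaluating it on the new basis vectors necessarily reproduces the transformed components; the assertion verifies this for one concrete choice of basis.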
 
In viewing a tensor as a multilinear map, it is conventional to identify the [[double dual]] ''V''<sup>∗∗</sup> of the vector space ''V'', that is, the space of linear functionals on the dual vector space ''V''<sup>∗</sup>, with the vector space ''V''. There is always a [[Dual space#Injection into the double-dual|natural linear map]] from ''V'' to its double dual, given by evaluating a linear form in ''V''<sup>∗</sup> against a vector in ''V''. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify ''V'' with its double dual.
 
=== Using tensor products ===
:<math>T \in \underbrace{V \otimes\dots\otimes V}_{p\text{ copies}} \otimes \underbrace{V^* \otimes\dots\otimes V^*}_{q \text{ copies}}.</math>
 
A basis {{math|''v''<sub>''i''</sub>}} of {{math|''V''}} and basis {{math|''w''<sub>''j''</sub>}} of {{math|''W''}} naturally induce a basis {{math|''v''<sub>''i''</sub> ⊗ ''w''<sub>''j''</sub>}} of the tensor product {{math|''V'' ⊗ ''W''}}. The components of a tensor {{math|''T''}} are the coefficients of the tensor with respect to the basis obtained from a basis {{math|<nowiki>{</nowiki>'''e'''<sub>''i''</sub><nowiki>}</nowiki>}} for {{math|''V''}} and its dual basis {{math|{'''''ε'''''{{i sup|''j''}}<nowiki>}</nowiki>}}, that is,
:<math>T = T^{i_1\dots i_p}_{j_1\dots j_q}\; \mathbf{e}_{i_1}\otimes\cdots\otimes \mathbf{e}_{i_p}\otimes \boldsymbol{\varepsilon}^{j_1}\otimes\cdots\otimes \boldsymbol{\varepsilon}^{j_q}.</math>
 
|issue=7–9
|issn=0302-7597
}} From p. 498: "And if we agree to call the ''square root'' (taken with a suitable sign) of this scalar product of two conjugate polynomes, P and KP, the common TENSOR of each, ... "</ref> to describe something different from what is now meant by a tensor.<ref group=Note>Namely, the [[norm (mathematics)|norm operation]] in a vector space.</ref> Gibbs introduced [[dyadics]] and [[polyadic algebra]], which are also tensors in the modern sense.<ref name="auto">{{Cite book |last=Guo |first=Hongyu |url=https://books.google.com/books?id=5dM3EAAAQBAJ&q=array+vector+matrix+tensor |title=What Are Tensors Exactly? |date=2021-06-16 |publisher=World Scientific |isbn=978-981-12-4103-1 |language=en}}</ref> The contemporary usage was introduced by [[Woldemar Voigt]] in 1898.<ref name="Voigt1898">{{cite book|first=Woldemar |last=Voigt|title=Die fundamentalen physikalischen Eigenschaften der Krystalle in elementarer Darstellung |trans-title=The fundamental physical properties of crystals in an elementary presentation |url={{google books |plainurl=y |id=QhBDAAAAIAAJ|page=20}}|year=1898|publisher=Von Veit|pages=20–|quote= Wir wollen uns deshalb nur darauf stützen, dass Zustände der geschilderten Art bei Spannungen und Dehnungen nicht starrer Körper auftreten, und sie deshalb tensorielle, die für sie charakteristischen physikalischen Grössen aber Tensoren nennen. [We therefore want [our presentation] to be based only on [the assumption that] conditions of the type described occur during stresses and strains of non-rigid bodies, and therefore call them "tensorial" but call the characteristic physical quantities for them "tensors".]}}</ref>
 
Tensor calculus was developed around 1890 by [[Gregorio Ricci-Curbastro]] under the title ''absolute differential calculus'', and originally presented by Ricci-Curbastro in 1892.<ref>{{cite journal
|first=G. |last=Ricci Curbastro
|title=Résumé de quelques travaux sur les systèmes variables de fonctions associés à une forme différentielle quadratique
}}</ref> It was made accessible to many mathematicians by the publication of Ricci-Curbastro and [[Tullio Levi-Civita]]'s 1900 classic text ''Méthodes de calcul différentiel absolu et leurs applications'' (Methods of absolute differential calculus and their applications).{{sfn|Ricci|Levi-Civita|1900}} In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.<ref name="auto"/>
 
In the 20th century, the subject came to be known as ''tensor analysis'', and achieved broader acceptance with the introduction of [[Albert Einstein|Einstein]]'s theory of [[general relativity]], around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer [[Marcel Grossmann]].<ref>{{cite book
|first=Abraham |last=Pais
|title=Subtle Is the Lord: The Science and the Life of Albert Einstein
! rowspan=6 | ''n''
! scope="row" | 0
| [[Scalar (mathematics)|Scalar]], e.g. [[scalar curvature]]
| [[Covector]], [[linear functional]], [[1-form]], e.g. [[multipole expansion|dipole moment]], [[gradient]] of a scalar field
| [[Bilinear form]], e.g. [[inner product]], [[quadrupole moment]], [[metric tensor]], [[Ricci curvature]], [[2-form]], [[symplectic form]]
| [[3-form]], e.g. [[multipole moment|octupole moment]]
|
| E.g. ''M''-form, that is, [[volume form]]
|
|-
! scope="row" | 1
| [[Euclidean vector]]
| [[Linear transformation]],<ref name="BambergSternberg1991">{{cite book|first1=Paul|last1=Bamberg|first2=Shlomo|last2=Sternberg|title=A Course in Mathematics for Students of Physics|volume=2|year=1991|publisher=Cambridge University Press|isbn=978-0-521-40650-5|page=669}}</ref> [[Kronecker delta]]
| E.g. [[cross product]] in three dimensions
| E.g. [[Riemann curvature tensor]]
|
|
|-
! scope="row" | 2
| Inverse [[metric tensor]], [[bivector]], e.g. [[Poisson structure]]
|
| E.g. [[elasticity tensor]]
|
|
|-
! scope="row" | ''N''
|[[Multivector]]
|
|
 
== Operations ==
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the [[Scalar multiplication|scaling of a vector]]. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor, but there are also operations that produce a tensor of different type.
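The component-wise nature of these operations can be sketched directly (ad hoc helper names, shown for order-2 tensors):

```python
# Addition and scalar multiplication of same-type tensors act
# entry by entry on their component arrays.
def add(S, T):
    return [[s + t for s, t in zip(row_s, row_t)]
            for row_s, row_t in zip(S, T)]

def scale(a, T):
    return [[a * t for t in row] for row in T]

S = [[1.0, 0.0], [2.0, 1.0]]
T = [[0.0, 3.0], [1.0, 1.0]]
assert add(S, T) == [[1.0, 3.0], [3.0, 2.0]]
assert scale(2.0, S) == [[2.0, 0.0], [4.0, 2.0]]
```

Both results are again component arrays of order-2 tensors of the same type, as the text states.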
 
=== Tensor product ===
{{Main|Tensor product}}
 
The [[tensor product]] takes two tensors, ''S'' and ''T'', and produces a new tensor, {{nowrap|{{math|''S'' ⊗ ''T''}}}}, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, that is,
<math display="block">(S \otimes T)(v_1, \ldots, v_n, v_{n+1}, \ldots, v_{n+m}) = S(v_1, \ldots, v_n)T(v_{n+1}, \ldots, v_{n+m}),</math>
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, that is,
<math display="block">
(S \otimes T)^{i_1\ldots i_l i_{l+1}\ldots i_{l+n}}_{j_1\ldots j_k j_{k+1}\ldots j_{k+m}} =
S^{i_1\ldots i_l}_{j_1\ldots j_k}\, T^{i_{l+1}\ldots i_{l+n}}_{j_{k+1}\ldots j_{k+m}}.
</math>
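For the simplest case, the tensor product of two order-1 tensors (vectors) is their outer product, an order-2 tensor; a minimal sketch (the helper name <code>outer</code> is ad hoc):

```python
# Tensor product on components: every component of S is multiplied by
# every component of T. For two vectors this is the outer product,
# whose order (2) is the sum of the orders of the inputs (1 + 1).
def outer(S, T):
    return [[s * t for t in T] for s in S]

S = [1.0, 2.0]
T = [3.0, 4.0, 5.0]
assert outer(S, T) == [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
```

Note that the result is a 2 × 3 array: the index ranges of the two factors are kept separate, matching the index pattern in the component formula above.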
 
===Machine learning===
{{Main|Tensor (machine learning)}}
The properties of [[Tensor (machine learning)|tensors]], especially [[tensor decomposition]], have enabled their use in [[machine learning]] to embed higher dimensional data in [[artificial neural networks]]. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is the same thing as a multidimensional array. Abstractly, a tensor belongs to a tensor product of spaces, each of which has a fixed basis, and the dimensions of the factor spaces can be different. Thus, an example of a tensor in this context is a rectangular matrix. Just as a rectangular matrix has two axes, a horizontal and a vertical axis, to indicate the position of each entry, a more general tensor has as many axes as there are factors in the tensor product to which it belongs, and an entry of the tensor is referred to by a tuple of integers. The various axes have different dimensions in general.
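A minimal sketch of this usage (plain nested lists rather than any particular machine-learning library): a "tensor" here is just an array with one index per axis, and the axes may have different lengths.

```python
# A 2 x 3 x 4 array: axis 0 has length 2, axis 1 has length 3,
# axis 2 has length 4. Entry values encode their own position
# (hundreds = axis 0, tens = axis 1, units = axis 2) for clarity.
tensor = [[[axis2 + 10 * axis1 + 100 * axis0
            for axis2 in range(4)]
           for axis1 in range(3)]
          for axis0 in range(2)]

# An entry is addressed by a tuple of integers, one index per axis.
assert tensor[1][2][3] == 123

# The "shape" records the dimension of each axis.
shape = (len(tensor), len(tensor[0]), len(tensor[0][0]))
assert shape == (2, 3, 4)
```

This matches the description above: three factors in the tensor product, hence three axes, each with its own dimension.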
 
== Generalizations ==
* {{wiktionary-inline|tensor}}
* [[Array data type]], for tensor storage and manipulation
* [[Bitensor]]
 
=== Foundational ===
 
[[Category:Concepts in physics]]
[[Category:Tensors]]