Talk:Tensor
This is the talk page for discussing improvements to the Tensor article. This is not a forum for general discussion of the subject of the article.
Archives: 1, 2, 3, 4, 5, 6, 7, 8. Auto-archiving period: 3 months.
This page has archives. Sections older than 100 days may be auto-archived by Lowercase sigmabot III if there are more than 4.
Referring to orthonormal basis as frame
I see my edit changing "from one orthonormal basis (called a frame) to another" to "from one orthonormal basis to another" in the Spinors section was reverted without comment by Tito Omburo. Is there any need to refer to frames in the physics sense in this section? "Frames" have an entirely distinct linear algebra meaning (and this is primarily a linear algebra article). You could make the exact same point in fewer words and without the ambiguity by just saying "basis", right? — Preceding unsigned comment added by 2600:4040:2255:4300:7426:36C9:973E:3FC5 (talk) 16:04, 9 January 2025 (UTC)
- I reverted the edit because the section still uses the word frame throughout, but without clarifying that in this section "frame" means "orthonormal basis". Tito Omburo (talk) 17:38, 9 January 2025 (UTC)
- Yes, sorry, I missed that at first. This is my first Wikipedia edit. Would it be alright with you if I took a stab at rewriting the spinor section so as not to mention frames at all? 2600:4040:2254:3D00:3119:AB26:2A63:C265 (talk) 03:54, 10 January 2025 (UTC)
- What would you call the "space of frames"? Tito Omburo (talk) 10:21, 10 January 2025 (UTC)
- The more I think about this, I'm not even sure it makes sense to have two paragraphs about spinors on this page. We're essentially saying "Here's something that would transform like a tensor *if* orientation didn't matter, which it *actually* does." But if you do think the tensor page should explain the basic motivation for spinors, perhaps something like: "Physical coordinate systems are often expressed in terms of orthonormal bases. Changes of orthonormal bases obtained from rotations transform the same way as tensors under rotation; however, not all possible orthonormal bases can be reached by rotation (because the orthogonal group representing all possible rotations has two connected components). Spinors and spin groups use a double cover of the orthogonal group to allow transformations between bases that can't be considered orientation-preserving rotations." 2600:4040:2257:3600:8546:6D3:38D8:37BD (talk) 19:34, 10 January 2025 (UTC)
- Darnit, that parenthetical should say "all changes of basis", not "all rotations". The whole point is that it includes things a simple rotation won't get you to. 2600:4040:2257:3600:8546:6D3:38D8:37BD (talk) 19:40, 10 January 2025 (UTC)
- Except this explanation is wrong. Spinors exist not because the orthogonal group is not connected, but because the special orthogonal group is not simply connected. Tito Omburo (talk) 19:44, 10 January 2025 (UTC)
- I believe you're mistaken. The orthogonal group has two connected components, one corresponding to orientation-preserving rotations (matrices with determinant 1) and one corresponding to orientation-flipping transformations (matrices with determinant -1). The special orthogonal group throws out the orientation-flipping ones and is simply connected. A quick Google search should lead you to multiple proofs of this fact. I was also wondering if there is a reason to explain spinors on this page at all. 2600:4040:2258:5B00:6C83:5698:CE47:BB6 (talk) 20:19, 10 January 2025 (UTC)
- No, the special orthogonal group is not simply connected. That's in some sense the whole point of spinors. Tito Omburo (talk) 20:57, 10 January 2025 (UTC)
- Yes, that is my understanding too. The special orthogonal group SO(n) has non-trivial fundamental group. Namely, its fundamental group is the group of two elements. The spin group is the two-to-one cover of SO(n) that corresponds to this fundamental group, according to covering space theory. Mgnbar (talk) 21:03, 10 January 2025 (UTC)
- It is also true that the orthogonal group O(n) has two connected components, one of which is the special orthogonal group SO(n). There is a two-to-one covering map from O(n) to SO(n). (In fact there are infinitely many such maps. I'm not aware of a canonical one.) But that's not what spin is about. Mgnbar (talk) 21:03, 10 January 2025 (UTC)
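For reference, a short worked summary of the standard facts at issue here (stated for n ≥ 3; textbook material, not a quotation from the article):

  \pi_0(\mathrm{O}(n)) \cong \mathbb{Z}/2\mathbb{Z}, \qquad \pi_1(\mathrm{SO}(n)) \cong \mathbb{Z}/2\mathbb{Z} \quad (n \ge 3),

and the spin group is the connected double cover realizing that fundamental group:

  1 \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow \mathrm{Spin}(n) \longrightarrow \mathrm{SO}(n) \longrightarrow 1.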
- @Tito Omburo and Mgnbar: This really belongs in Tensor fields. A basis pertains to a vector space, while a frame pertains to a coordinate patch in a differentiable manifold. The physics literature often refers to spinor fields as spinors, tensor fields as tensors, and vector fields as vectors, but this article is supposed to be about the algebraic entities, not the corresponding fiber bundles on manifolds. -- Shmuel (Seymour J.) Metz Username:Chatul (talk)
- Well, I've seen "frame" refer either to a basis of the tangent space at a point or to a section of the frame bundle, in both mathematical and physical contexts. In particular, Cartan famously studied "moving frames", which wouldn't be a sensible combination of words unless a frame was fixed and not a field. The main thing is consistency of usage in the article, and if, instead of saying "orthonormal basis", we can get away with using the simpler term "frame", which also borrows from the physical idea of a "frame of reference" that we are trying to make accessible, all the better. (Incidentally, I would use the term "frame field" to refer to the field case. See, e.g., Michor, Topics in Differential Geometry.) Tito Omburo (talk) 12:00, 14 July 2025 (UTC)
The bulk of this article should not use the Einstein summation convention
The bulk of this article should not use the Einstein summation convention. The Einstein summation convention should be introduced in a special section near the bottom of the article. 134.16.68.202 (talk) 01:22, 25 January 2024 (UTC)
- I tend to agree, although I don't feel strongly. There are a few cases (general transformation rule) where explicitly showing the summations would significantly expand the length of the expression. But it would be a net benefit to novice readers. Mgnbar (talk) 21:07, 25 January 2024 (UTC)
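To make the trade-off concrete, here is the general transformation rule for a type (p, q) tensor, first with the summations written explicitly and then under the Einstein convention (the change-of-basis matrix R, with new basis vectors ê_j = Σ_i R^i_j e_i, is illustrative notation, not necessarily the article's):

  \hat{T}^{i_1 \ldots i_p}_{j_1 \ldots j_q}
    = \sum_{k_1=1}^{n} \cdots \sum_{k_p=1}^{n} \;
      \sum_{l_1=1}^{n} \cdots \sum_{l_q=1}^{n}
      (R^{-1})^{i_1}{}_{k_1} \cdots (R^{-1})^{i_p}{}_{k_p}\,
      R^{l_1}{}_{j_1} \cdots R^{l_q}{}_{j_q}\,
      T^{k_1 \ldots k_p}_{l_1 \ldots l_q},

which, with the summations left implicit, collapses to

  \hat{T}^{i_1 \ldots i_p}_{j_1 \ldots j_q}
    = (R^{-1})^{i_1}{}_{k_1} \cdots (R^{-1})^{i_p}{}_{k_p}\,
      R^{l_1}{}_{j_1} \cdots R^{l_q}{}_{j_q}\,
      T^{k_1 \ldots k_p}_{l_1 \ldots l_q}.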
Incorrect definition
The definitions in the "as multilinear maps" and "using tensor products" sections are wrong. Requiring all copies of V to be the same vector space would exclude non-square matrices from being tensors (an m×n matrix, for example, is a multilinear map R^m × R^n → R). 65.93.245.82 (talk) 06:09, 23 April 2024 (UTC)
- The definition is not so much incorrect as restrictive. This article views tensors in terms of a single vector space V and its dual V*. In the past, we have had discussions about whether "tensor" simply means "element of a tensor product of vector spaces", which would be broad enough to handle your objection. The consensus was that the term "tensor" is not used that broadly in the literature. Mgnbar (talk) 11:46, 23 April 2024 (UTC)
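To make the two readings concrete (the spaces and dimensions here are placeholders, not the article's notation): under the article's single-space convention a type (0, 2) tensor is a bilinear map on one space, whereas the broader "element of any tensor product" reading also admits maps on two different spaces:

  T : V \times V \to \mathbb{R} \quad (\dim V = n;\ \text{an } n \times n \text{ matrix}),
  \qquad
  B : V \times W \to \mathbb{R} \quad (\dim V = m,\ \dim W = n;\ \text{an } m \times n \text{ matrix}).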
First image and caption don't match
The image at the top of the article and its caption are mismatched in several fairly confusing ways. First, the unit vectors e are not shown in the picture. Second, the output vector T is not shown in the picture. The reader is left trying to guess which e goes where, and why there are three output vectors when it says a second-order tensor has only one. Also, the σ coefficients aren't mentioned in the caption at all. 2600:8800:1180:25:AD40:E4C1:631E:5E07 (talk) 21:58, 25 April 2024 (UTC)
- Surely the e_i are represented by the locations at which T is evaluated, and the outputs are shown in a bluish color? --JBL (talk) 23:26, 25 April 2024 (UTC)
- The anonymous poster has a point. One can guess where the e_i are, but one should not have to. More importantly, T^(e_i) is prominent in the figure and missing from the caption. In fact, is it true that T^(e_i) = T(e_i)? Or is the figure reserving T^(e_i) for the traction vectors that the stress tensor produces? If that's the case, then is the caption wrong? It's all a bit of a mess. Mgnbar (talk) 00:22, 26 April 2024 (UTC)
- It is helpful to compare with an earlier revision of the caption, although this doesn't fully resolve the difficulty. Tito Omburo (talk) 13:14, 26 April 2024 (UTC)
- It doesn't help that everything is a subscript, preventing use of the Einstein summation convention. I would guess that σ_ij = T^(e_i) · e_j, the projection of T^(e_i) on e_j. Things would have been clearer had the caption spelled things out, used both subscripts and superscripts, and not introduced an apparently extraneous additional name. I agree with Tito Omburo that the earlier version is clearer, although I find the nomenclature awkward. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:33, 26 April 2024 (UTC) -- Revised 19:04, 26 April 2024 (UTC)
- The current caption does describe what/where the e_i are (they are the normals to the faces of the cube). I agree about the sigmas (I mean, I know what must have been intended, but it's not good that it's entirely implicit). --JBL (talk) 18:16, 26 April 2024 (UTC)
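For reference, the relation the figure presumably intends (this is the standard Cauchy stress convention, not a quotation from the caption): the traction vector on the face with outward unit normal e_i is

  \mathbf{T}^{(\mathbf{e}_i)} = \sum_{j=1}^{3} \sigma_{ij}\,\mathbf{e}_j,
  \qquad\text{equivalently}\qquad
  \sigma_{ij} = \mathbf{T}^{(\mathbf{e}_i)} \cdot \mathbf{e}_j .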
Machine learning still unclear
@Tito Omburo: A recent edit, permalink/1300282879, removed a {{clarify|text=the dimension of the spaces along the different axes}} template without actually clarifying the text in question. An axis is indexed by a scalar, e.g., real number, complex number. Changing axis to slot without defining slots leaves "the dimension of the spaces belonging to the different slots" just as murky as before. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 11:57, 15 July 2025 (UTC)
- I never understood the clarification that was offered. Can we please discuss it relative to the mathematical meaning of tensor?
- Suppose that we have an n-dimensional vector space V, and we choose a basis for it. Then a linear transformation from V to itself is an element of V ⊗ V*, and it can be written as an n×n matrix M. The entries of M are denoted M_ij (or maybe one of the indices is superscripted).
- Are the indices i and j indexing the "axes"/"slots", one of which is (for) V and one of which is (for) V*? Is that the meaning of "axis"/"slot"?
- If so, then an axis is indexed by an integer between 1 and n — not by a general real or complex number. Please help me understand. Mgnbar (talk) 12:53, 15 July 2025 (UTC)
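Spelling out the identification in question (a sketch of the standard correspondence; e_i denotes the chosen basis of V and ε^j its dual basis, notation introduced here only for illustration):

  M \;=\; \sum_{i=1}^{n}\sum_{j=1}^{n} M^{i}{}_{j}\; \mathbf{e}_i \otimes \varepsilon^{j} \;\in\; V \otimes V^{*},

so the two tensor factors V and V* are the two "slots"/"axes", and each of the indices i and j runs over 1, ..., n.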
- Actually in machine learning, a tensor is just a multidimensional array; the axes are the different dimensions of the tensor (just like a rectangular matrix has horizontal and vertical axes). Thus a tensor belongs to M_1 ⊗ ⋯ ⊗ M_k, where each factor M_i is a module over a "ring" (of floating point numbers, usually), each having a fixed basis. The axes refer to the M_i, and they generally have different dimensions. Tito Omburo (talk) 13:01, 15 July 2025 (UTC)
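A minimal NumPy sketch of this usage, for concreteness (the shape (2, 3, 4) is chosen arbitrarily for illustration):

  import numpy as np

  # A "tensor" in the machine-learning sense: just a multidimensional array.
  # Its three axes have different dimensions (2, 3, and 4).
  T = np.zeros((2, 3, 4), dtype=np.float32)

  print(T.ndim)   # 3 -- the number of axes
  print(T.shape)  # (2, 3, 4) -- each axis has its own dimension

  # Axis i plays the role of one fixed-basis factor of dimension T.shape[i]
  # in the tensor-product description above.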
- I think that your description agrees with mine (where they overlap). But then I don't understand how Chatul says that the axes are indexed by real or complex numbers. Mgnbar (talk) 14:38, 15 July 2025 (UTC)
- I misread the text. Not enough coffee. :-(
- BTW, are Tito Omburo's scare quotes around ring because FP arithmetic violates the distributive and associative properties? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:23, 15 July 2025 (UTC)
- Indeed! Tito Omburo (talk) 16:58, 15 July 2025 (UTC)
- Thanks for clarifying, both of you. Happy travels. Mgnbar (talk) 17:10, 15 July 2025 (UTC)
- Thanks everyone, from me as well. Always good to make progress! Tito Omburo (talk) 19:11, 15 July 2025 (UTC)