== SVD and spectral decomposition ==
=== Singular values, singular vectors, and their relation to the SVD ===
A non-negative real number {{tmath|\sigma}} is a '''[[singular value]]''' for {{tmath|\mathbf M}} if and only if there exist [[unit vector]]s {{tmath|\mathbf u}} in {{tmath|K^m}} and {{tmath|\mathbf v}} in {{tmath|K^n}} such that
<math display=block>\begin{align}
\mathbf{M} \mathbf{v} &= \sigma \mathbf{u}, \\
\mathbf{M}^* \mathbf{u} &= \sigma \mathbf{v}.
\end{align}</math>
The vectors {{tmath|\mathbf u}} and {{tmath|\mathbf v}} are called '''left-singular''' and '''right-singular vectors''' for {{tmath|\sigma,}} respectively.
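As an illustration, the following minimal Python/[[NumPy]] sketch (using an arbitrary random matrix as example data) checks these defining relations for the singular triplets returned by a numerical SVD routine:
<syntaxhighlight lang="python">
import numpy as np

# Arbitrary example matrix; any real or complex matrix works the same way.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))

# numpy.linalg.svd returns U, the singular values, and V* (called Vt here).
U, s, Vt = np.linalg.svd(M, full_matrices=False)

for i, sigma in enumerate(s):
    u, v = U[:, i], Vt[i, :].conj()
    assert np.allclose(M @ v, sigma * u)            # M v = sigma u
    assert np.allclose(M.conj().T @ u, sigma * v)   # M* u = sigma v
</syntaxhighlight>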
===Solving homogeneous linear equations===
A set of [[homogeneous linear equation]]s can be written as {{tmath|\mathbf A \mathbf x {{=}} \mathbf 0}} for a matrix {{tmath|\mathbf A}}, vector {{tmath|\mathbf x}}, and [[zero vector]] {{tmath|\mathbf 0.}} A typical situation is that {{tmath|\mathbf A}} is known and a non-zero {{tmath|\mathbf x}} is to be determined which satisfies the equation. Such an {{tmath|\mathbf x}} belongs to the [[kernel (linear algebra)|null space]] of {{tmath|\mathbf A}} and can be characterized as a right-singular vector corresponding to a singular value of {{tmath|\mathbf A}} that is zero.
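For illustration, a minimal Python/[[NumPy]] sketch (with a small, deliberately rank-deficient example matrix) recovers a null vector from the right-singular vector belonging to the smallest singular value:
<syntaxhighlight lang="python">
import numpy as np

# Deliberately rank-deficient example: the second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
x = Vt[-1]                        # right-singular vector for the smallest singular value

print(s[-1])                      # ~0: A has a vanishing singular value
print(np.linalg.norm(A @ x))      # ~0: x solves A x = 0
</syntaxhighlight>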
===Total least squares minimization===
where <math>\tilde{\mathbf \Sigma}</math> is the same matrix as <math>\mathbf \Sigma</math> except that it contains only the {{tmath|r}} largest singular values (the other singular values are replaced by zero). This is known as the '''[[Low-rank approximation|Eckart–Young theorem]]''', as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; see {{harvnb|Stewart|1993}}).
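A minimal Python/[[NumPy]] sketch (with an arbitrary random matrix as example data) illustrates the theorem: the spectral-norm error of the truncated SVD equals the first discarded singular value.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

r = 2                                          # target rank
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]    # truncated SVD (best rank-r approximation)

print(np.linalg.norm(M - M_r, 2))   # spectral-norm error of the approximation
print(s[r])                         # equals sigma_{r+1}, as the theorem predicts
</syntaxhighlight>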
=== Image compression ===
[[File:Svd compression.jpg|thumb|Singular-value decomposition (SVD) image compression of a 1996 Chevrolet Corvette photograph. The original RGB image (upper-left) is compared with rank 1, 10, and 100 reconstructions.|292x292px]]One practical consequence of the low-rank approximation given by the SVD is that a [[greyscale image]] represented as an <math>m \times n</math> matrix <math>\mathbf{A}</math> can be stored efficiently by keeping only the first <math>k</math> singular values and the corresponding singular vectors. The truncated decomposition
<math display=block>\mathbf{A}_k = \sum_{j=1}^k \sigma_j \mathbf{u}_j \mathbf{v}_j^\operatorname{T}</math>
gives the image with the smallest 2-norm error among all rank-<math>k</math> approximations. The task thus becomes finding an approximation that balances perceptual fidelity against the number of vectors required to reconstruct the image. Storing <math>\mathbf{A}_k</math> requires only <math>k(n + m + 1)</math> floating-point numbers, compared to the <math>nm</math> integers of the original image. The same idea extends to color images by applying this operation to each channel separately or by stacking the channels into one matrix.
Since the singular values of most natural images decay quickly, most of their variance is often captured by a small <math>k</math>. For a 1528 × 1225 greyscale image, a relative error of 0.7% can be achieved with as few as <math>k = 100</math> singular values.<ref>{{Cite book |last=Holmes |first=Mark |title=Introduction to Scientific Computing and Data Analysis |edition=2nd |publisher=Springer |year=2023 |isbn=978-3-031-22429-4}}</ref> In practice, however, computing the SVD can be too computationally expensive, and the resulting compression is typically less storage-efficient than a specialized algorithm such as [[JPEG]].
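In code, the compression amounts to the truncated reconstruction above. The following minimal Python/[[NumPy]] sketch uses a random array as a stand-in for a real greyscale image; the array size, the choice of <math>k</math>, and the resulting error are purely illustrative.
<syntaxhighlight lang="python">
import numpy as np

# Stand-in for a greyscale image loaded as an m-by-n array of intensities.
rng = np.random.default_rng(0)
image = rng.random((256, 192))

U, s, Vt = np.linalg.svd(image, full_matrices=False)

k = 20                                        # number of singular values kept
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k reconstruction

m, n = image.shape
print(m * n)                                  # storage of the original image
print(k * (m + n + 1))                        # storage of the truncated factors
print(np.linalg.norm(image - A_k) / np.linalg.norm(image))  # relative Frobenius error
</syntaxhighlight>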
===Nearest orthogonal matrix===
It is possible to use the SVD of a square matrix {{tmath|\mathbf A}} to determine the [[orthogonal matrix]] {{tmath|\mathbf O}} closest to {{tmath|\mathbf A.}} The closeness of fit is measured by the [[Frobenius norm]] of {{tmath|\mathbf O - \mathbf A.}} The solution is the product {{tmath|\mathbf U \mathbf V^*;}} intuitively, it amounts to replacing the singular values of {{tmath|\mathbf A}} by ones.
A similar problem, with interesting applications in [[shape analysis (digital geometry)|shape analysis]], is the [[orthogonal Procrustes problem]], which consists of finding an orthogonal matrix {{tmath|\mathbf O}} which most closely maps {{tmath|\mathbf A}} to {{tmath|\mathbf B.}} Specifically,
<math display=block>
\mathbf{O} = \underset{\boldsymbol{\Omega}}{\operatorname{argmin}} \|\mathbf{A}\boldsymbol{\Omega} - \mathbf{B}\|_F \quad \text{subject to} \quad \boldsymbol{\Omega}^\operatorname{T} \boldsymbol{\Omega} = \mathbf{I},
</math>
where <math>\| \cdot \|_F</math> denotes the Frobenius norm.
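A minimal Python/[[NumPy]] sketch (with an arbitrary random matrix as example data) computes the nearest orthogonal matrix by replacing the singular values with ones:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

U, s, Vt = np.linalg.svd(A)
O = U @ Vt                                # nearest orthogonal matrix in the Frobenius norm

print(np.allclose(O.T @ O, np.eye(3)))    # O is orthogonal
print(np.linalg.norm(O - A, 'fro'))       # Frobenius distance from A
</syntaxhighlight>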
===Signal processing===
The SVD and pseudoinverse have been successfully applied to [[signal processing]],<ref>{{cite journal |last=Sahidullah |first=Md. |author2=Kinnunen, Tomi |title=Local spectral variability features for speaker verification |journal=Digital Signal Processing |date=March 2016 |volume=50 |pages=1–11 |doi=10.1016/j.dsp.2015.10.011 |bibcode=2016DSP....50....1S |url=https://erepo.uef.fi/handle/123456789/4375}}</ref> [[image processing]]<ref name="Mademlis2018">{{cite book |last1=Mademlis |first1=Ioannis |last2=Tefas |first2=Anastasios |last3=Pitas |first3=Ioannis |title=2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |chapter=Regularized SVD-Based Video Frame Saliency for Unsupervised Activity Video Summarization |publisher=IEEE |year=2018}}</ref> and genomic signal processing.<ref>
{{Cite journal
| author = O. Alter, P. O. Brown and D. Botstein
| title = Singular Value Decomposition for Genome-Wide Expression Data Processing and Modeling
| journal = [[PNAS]]
| volume = 97
| issue = 18
| pages = 10101–10106
| year = 2000
| doi = 10.1073/pnas.97.18.10101
}}</ref>
In [[astrodynamics]], the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design<ref name=muralidharan2023stretching>{{Cite journal|title=Stretching directions in cislunar space: Applications for departures and transfer design|first1=Vivek|last1=Muralidharan|first2=Kathleen|last2=Howell|journal =Astrodynamics | volume = 7 | issue = 2| pages = 153–178 | date = 2023 | doi = 10.1007/s42064-022-0147-z | bibcode = 2023AsDyn...7..153M|s2cid=252637213 }}</ref> and [[orbital station-keeping]].<ref name=Muralidharan2021>{{Cite journal|title=Leveraging stretching directions for stationkeeping in Earth-Moon halo orbits |first1=Vivek|last1=Muralidharan|first2=Kathleen|last2=Howell|journal =[[Advances in Space Research]] | volume = 69 | issue = 1| pages = 620–646 | date = 2022 | doi = 10.1016/j.asr.2021.10.028 | bibcode = 2022AdSpR..69..620M|s2cid=239490016 }}</ref>
The SVD can be used to measure the similarity between real-valued matrices.<ref name=albers2025>{{Cite journal|title=Assessing the Similarity of Real Matrices with Arbitrary Shape|first1=Jasper|last1=Albers|first2=Anno|last2=Kurth|first3=Robin|last3=Gutzen|first4=Aitor|last4=Morales-Gregorio|first5=Michael|last5=Denker|first6=Sonja|last6=Gruen|first7=Sacha|last7=van Albada|first8=Markus|last8=Diesmann|journal=PRX Life|year=2025}}</ref>
== Proof of existence ==
By the [[extreme value theorem]], this continuous function attains a maximum at some {{tmath|\mathbf u}} when restricted to the unit sphere <math>\{\|\mathbf x\| = 1\}.</math> By the [[Lagrange multipliers]] theorem, {{tmath|\mathbf u}} necessarily satisfies
<math display=block>\nabla \mathbf{u}^\operatorname{T} \mathbf{M} \mathbf{u} - \lambda \cdot \nabla \mathbf{u}^\operatorname{T} \mathbf{u} = \mathbf{0}</math>
for some real number {{tmath|\lambda.}} The nabla symbol, {{tmath|\nabla}}, is the [[del]] operator (differentiation with respect to {{nobr|{{tmath|\mathbf x}}).}} Using the symmetry of {{tmath|\mathbf M}} we obtain
<math display=block>\nabla \mathbf{x}^\operatorname{T} \mathbf{M} \mathbf{x} = 2 \mathbf{M} \mathbf{x}, \qquad \nabla \mathbf{x}^\operatorname{T} \mathbf{x} = 2 \mathbf{x},</math>
so the condition above reduces to {{tmath|\mathbf M \mathbf u {{=}} \lambda \mathbf u;}} that is, {{tmath|\mathbf u}} is a unit-length eigenvector of {{tmath|\mathbf M}} with eigenvalue {{tmath|\lambda.}}
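The stationarity condition can be checked numerically; the following minimal Python/[[NumPy]] sketch uses an arbitrary symmetric test matrix and the eigenvector belonging to its largest eigenvalue:
<syntaxhighlight lang="python">
import numpy as np

# Arbitrary symmetric test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = (B + B.T) / 2

eigvals, eigvecs = np.linalg.eigh(M)
u = eigvecs[:, -1]                   # unit eigenvector of the largest eigenvalue
lam = eigvals[-1]

print(np.allclose(M @ u, lam * u))   # Lagrange condition M u = lambda u
print(u @ M @ u, lam)                # the maximum of x^T M x on the unit sphere equals lambda
</syntaxhighlight>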
In applications that require an approximation to the [[Moore–Penrose inverse]] of the matrix {{tmath|\mathbf M,}} the smallest singular values of {{tmath|\mathbf M}} are of interest; these are more challenging to compute than the largest ones.
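For illustration, a minimal Python/[[NumPy]] sketch builds the pseudoinverse from the SVD, truncating singular values below an assumed tolerance (the matrix and the threshold are arbitrary example choices):
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

tol = 1e-10                                    # assumed truncation threshold
s_inv = np.where(s > tol, 1.0 / s, 0.0)        # invert only the significant singular values
A_pinv = Vt.T @ np.diag(s_inv) @ U.T           # Moore-Penrose pseudoinverse of A

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # agrees with NumPy's built-in pseudoinverse
</syntaxhighlight>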
Truncated SVD is employed in [[latent semantic indexing]].<ref>{{cite journal | last1 = Chicco | first1 = D | last2 = Masseroli | first2 = M | year = 2015 | title = Software suite for gene and protein annotation prediction and similarity search | journal = IEEE/ACM Transactions on Computational Biology and Bioinformatics | volume = 12 | issue = 4 | pages = 837–843 | doi=10.1109/TCBB.2014.2382127 | pmid = 26357324 | bibcode = 2015ITCBB..12..837C | hdl = 11311/959408 | s2cid = 14714823 | hdl-access = free }}
</ref>
* {{Citation | last1 = Banerjee | first1 = Sudipto | last2 = Roy | first2 = Anindya | date = 2014 | title = Linear Algebra and Matrix Analysis for Statistics | series = Texts in Statistical Science | publisher = Chapman and Hall/CRC | edition = 1st | isbn = 978-1420095388}}
* {{Cite book | last1=Bisgard | first1 = James | year = 2021 | title = Analysis and Linear Algebra: The Singular Value Decomposition and Applications | series = Student Mathematical Library | publisher = AMS | edition = 1st | isbn = 978-1-4704-6332-8}}
* {{cite journal | last1 = Chicco | first1 = D | last2 = Masseroli | first2 = M | year = 2015 | title = Software suite for gene and protein annotation prediction and similarity search | journal = IEEE/ACM Transactions on Computational Biology and Bioinformatics | volume = 12 | issue = 4 | pages = 837–843 | doi=10.1109/TCBB.2014.2382127 | pmid = 26357324 | bibcode = 2015ITCBB..12..837C | hdl = 11311/959408 | s2cid = 14714823 | hdl-access = free }}
* {{Cite book | last2=Bau III | first2=David | last1=Trefethen | first1=Lloyd N. | author1-link = Lloyd N. Trefethen | title=Numerical linear algebra | publisher=Society for Industrial and Applied Mathematics | ___location=Philadelphia | isbn=978-0-89871-361-9 | year=1997 }}
* {{Cite journal | last1=Demmel | first1=James | author1-link = James Demmel | last2=Kahan | first2=William | author2-link=William Kahan | title=Accurate singular values of bidiagonal matrices | doi=10.1137/0911052 | year=1990 | journal= SIAM Journal on Scientific and Statistical Computing| volume=11 | issue=5 | pages=873–912 | citeseerx=10.1.1.48.3740 }}