Rayleigh–Ritz method

The Rayleigh–Ritz method is a direct numerical method of approximating eigenvalues, originating in the context of solving physical boundary value problems and named after Lord Rayleigh and Walther Ritz.

The name Rayleigh–Ritz is debated[1][2] against the alternative Ritz method, after Walther Ritz alone, since the numerical procedure was published by Walther Ritz in 1908–1909. According to Leissa,[1] Lord Rayleigh wrote a paper in 1911 congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, together with the fact that in the trivial case of a single vector the method reduces to the Rayleigh quotient, makes the arguable misnomer persist. According to Ilanko,[2] citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other hand for numerical calculation of the solutions, by substituting for the variational problems simpler approximating extremum problems in which a finite number of parameters need to be determined. Ironically for the debate, the modern justification of the algorithm drops the calculus of variations in favor of the simpler and more general approach of orthogonal projection, as in the Galerkin method named after Boris Galerkin, thus also leading to the name Ritz–Galerkin method.

It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz–Galerkin method. In mechanical engineering, the terms Rayleigh–Ritz method or Ritz method are typical when approximating the eigenmodes of a physical system, such as finding the resonant frequencies of a structure in order to guide appropriate damping.

Algorithm description for matrices

In numerical linear algebra, the Rayleigh–Ritz method is commonly[3] applied to approximate an eigenvalue problem

$A x = \lambda x$

for the matrix $A \in \mathbb{C}^{N \times N}$ of size $N$ using a projected matrix of a smaller size $m < N$, generated from a given matrix $V \in \mathbb{C}^{N \times m}$ with orthonormal columns. The matrix version of the algorithm is the simplest:

  1. Compute the $m \times m$ matrix $V^* A V$, where $V^*$ denotes the complex-conjugate transpose of $V$
  2. Solve the eigenvalue problem $V^* A V \, y_i = \mu_i y_i$
  3. Compute the Ritz vectors $\tilde{x}_i = V y_i$ and the Ritz values $\tilde{\lambda}_i = \mu_i$
  4. Output approximations $(\tilde{\lambda}_i, \tilde{x}_i)$, called Ritz pairs, to eigenvalues and eigenvectors of the original matrix $A$

If the subspace with the orthonormal basis given by the columns of the matrix $V \in \mathbb{C}^{N \times m}$ contains $k \leq m$ vectors that are close to eigenvectors of the matrix $A$, the Rayleigh–Ritz method above finds $k$ Ritz vectors that approximate these eigenvectors well. The easily computable quantity $\| A \tilde{x}_i - \tilde{\lambda}_i \tilde{x}_i \|$ determines the accuracy of such an approximation for every Ritz pair.
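The four steps above translate directly into a few lines of NumPy. The following is a minimal sketch, in which the Hermitian test matrix $A$, the subspace dimension $m$, and the randomly generated orthonormal basis $V$ are arbitrary choices made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 100, 10

A = rng.standard_normal((N, N))
A = (A + A.T) / 2                        # example Hermitian (real symmetric) matrix

# Orthonormal basis V of an m-dimensional trial subspace (here, a random one)
V, _ = np.linalg.qr(rng.standard_normal((N, m)))

Am = V.conj().T @ A @ V                  # step 1: projected m-by-m matrix V* A V
mu, Y = np.linalg.eigh(Am)               # step 2: small eigenvalue problem
X = V @ Y                                # step 3: Ritz vectors, the columns of X
# step 4: Ritz pairs (mu[i], X[:, i]); residual norms indicate their accuracy
residuals = np.linalg.norm(A @ X - X * mu, axis=0)
```

The residual norms computed in the last line are exactly the easily computable quantities $\| A \tilde{x}_i - \tilde{\lambda}_i \tilde{x}_i \|$ described above.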

In the easiest case $m = 1$, the $N \times m$ matrix $V$ turns into a unit column-vector $v$, the $m \times m$ matrix $V^* A V$ is a scalar equal to the Rayleigh quotient $\rho(v) = v^* A v / v^* v$, the only solution to the eigenvalue problem is $y_1 = 1$ and $\mu_1 = \rho(v)$, and the only Ritz vector is $v$ itself. Thus, the Rayleigh–Ritz method turns into computing the Rayleigh quotient if $m = 1$.

Another useful connection to the Rayleigh quotient is that $\mu_i = \rho(\tilde{x}_i)$ for every Ritz pair $(\tilde{\lambda}_i, \tilde{x}_i)$, allowing one to derive some properties of the Ritz values $\mu_i$ from the corresponding theory for the Rayleigh quotient. For example, if $A$ is a Hermitian matrix, its Rayleigh quotient (and thus every Ritz value) is real and takes values within the closed interval between the smallest and largest eigenvalues of $A$.
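Continuing the sketch above, this identity can be checked numerically: the Rayleigh quotient of each computed Ritz vector should reproduce the corresponding Ritz value.

```python
# Rayleigh quotient of each Ritz vector (column of X) equals its Ritz value mu[i]
rho = np.einsum('ij,jk,ki->i', X.conj().T, A, X) / np.einsum('ij,ji->i', X.conj().T, X)
assert np.allclose(rho, mu)
```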

Derivation from calculus of variations

Using this technique, we approximate the variational problem and end up with a finite-dimensional problem. So let us start with the problem of seeking a function $y(x)$ that extremizes a functional $I[y(x)]$. Assume that we are able to approximate $y(x)$ by a linear combination of linearly independent functions of the type

$y(x) \approx \varphi_0(x) + c_1 \varphi_1(x) + c_2 \varphi_2(x) + \cdots + c_N \varphi_N(x),$

where $c_1, c_2, \ldots, c_N$ are constants to be determined by a variational method, such as the one described below.

The selection of which approximating functions $\varphi_i(x)$ to use is arbitrary except for the following considerations:

a) If the problem has boundary conditions such as fixed end points, then $\varphi_0(x)$ is chosen to satisfy the problem's boundary conditions, and all other $\varphi_i(x)$ vanish at the boundary.

b) If the form of the solution is known, then $\varphi_i(x)$ can be chosen so that $y(x)$ will have that form.

The expansion of $y(x)$ in terms of approximating functions replaces the variational problem of extremizing the functional $I[y(x)]$ with the problem of finding the set of constants $c_1, c_2, \ldots, c_N$ that extremizes $I(c_1, c_2, \ldots, c_N)$. We can now solve this by setting the partial derivatives to zero. For each value of $i$,

$\frac{\partial I}{\partial c_i} = 0.$

The procedure is first to determine an initial estimate of $y(x)$ by the approximation $y(x) \approx c_1 \varphi_1(x)$. Next, the approximation $y(x) \approx c_1 \varphi_1(x) + c_2 \varphi_2(x)$ is used (with $c_1$ being redetermined). The process continues with $y(x) \approx c_1 \varphi_1(x) + c_2 \varphi_2(x) + c_3 \varphi_3(x)$ as the third approximation, and so on. At each stage the following two items are true:

  1. At the $i$-th stage, the terms $c_1, c_2, \ldots, c_{i-1}$ are redetermined
  2. The approximation at the $i$-th stage will be no worse than the approximation at the $(i-1)$-th stage

Convergence of the procedure means that as $N$ tends to infinity, the approximation will tend towards the exact function $y(x)$ that extremizes the functional $I[y(x)]$.
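As a concrete illustration, the following sketch carries out this stage-by-stage minimization numerically for a hypothetical model problem: extremize $I[y] = \int_0^1 \left( \tfrac{1}{2} y'(x)^2 - y(x) \right) dx$ with $y(0) = y(1) = 0$, whose exact extremizer is $y(x) = x(1-x)/2$, using trial functions $\varphi_i(x) = x^i (1-x)$ that vanish at both end points. The functional, the trial functions, and the optimizer are all choices made only for this example:

```python
import numpy as np
from scipy.optimize import minimize

# Model problem: extremize I[y] = int_0^1 ( 0.5*y'(x)^2 - y(x) ) dx, y(0) = y(1) = 0.
# Trial functions phi_i(x) = x**i * (1 - x) satisfy the boundary conditions.
x = np.linspace(0.0, 1.0, 2001)          # quadrature grid

def y_approx(c, x):
    """Ritz approximation y(x) ~ sum_i c_i * x**i * (1 - x)."""
    return sum(ci * x**(i + 1) * (1.0 - x) for i, ci in enumerate(c))

def functional(c):
    """Numerically evaluate I(c_1, ..., c_N) on the grid."""
    y = y_approx(c, x)
    yp = np.gradient(y, x)               # finite-difference y'(x)
    return np.trapz(0.5 * yp**2 - y, x)

for n in range(1, 4):                    # stages N = 1, 2, 3
    res = minimize(functional, np.zeros(n))
    print(n, res.x, functional(res.x))
```

Already at the first stage ($N = 1$) the minimizer $c_1 \approx 1/2$ reproduces the exact solution here, and adding further terms does not make the approximation worse, in line with the two items above.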

In many cases one uses a complete set of functions, e.g. polynomials or sines and cosines. A set of functions $\{\varphi_i(x)\}$ is called complete over $[a, b]$ if, for each Riemann integrable function $f(x)$, there is a set of values of the coefficients $\{c_i\}$ that reproduces $f(x)$.

The above outlined procedure can be extended to cases with more than one independent variable.

Applications in mechanical engineering

The Rayleigh–Ritz method is often used in mechanical engineering for finding the approximate real resonant frequencies of multi-degree-of-freedom systems, such as spring-mass systems or flywheels on a shaft with varying cross section. It is an extension of Rayleigh's method. It can also be used for finding buckling loads and post-buckling behaviour for columns.

Consider the case in which we want to find the resonant frequency of oscillation of a system. First, write the oscillation in the form

$y(x, t) = Y(x) \cos \omega t$

with an unknown mode shape $Y(x)$. Next, find the total energy of the system, consisting of a kinetic energy term and a potential energy term. The kinetic energy term involves the square of the time derivative of $y(x, t)$ and thus gains a factor of $\omega^2$. Thus, we can calculate the total energy of the system and express it in the following form:

$E = T + V \approx A[Y(x)] \, \omega^2 \sin^2 \omega t + B[Y(x)] \cos^2 \omega t$

By conservation of energy, the average kinetic energy must be equal to the average potential energy. Thus,

$\omega^2 = \frac{B[Y(x)]}{A[Y(x)]} = R[Y(x)],$

which is also known as the Rayleigh quotient. Thus, if we knew the mode shape $Y(x)$, we would be able to calculate $A[Y(x)]$ and $B[Y(x)]$, and in turn obtain the eigenfrequency. However, we do not yet know the mode shape. In order to find it, we can approximate $Y(x)$ as a combination of a few approximating functions $Y_i(x)$,

$Y(x) \approx \sum_{i=1}^{N} c_i Y_i(x),$

where $c_1, c_2, \ldots, c_N$ are constants to be determined. In general, if we choose a random set of $c_i$, it will describe a superposition of the actual eigenmodes of the system. However, if we seek $c_i$ such that the eigenfrequency $\omega^2$ is minimised, then the mode described by this set of $c_i$ will be close to the lowest actual eigenmode of the system. Thus, this finds the lowest eigenfrequency. If we find eigenmodes orthogonal to this approximated lowest eigenmode, we can approximately find the next few eigenfrequencies as well.

In general, we can express $A[Y(x)]$ and $B[Y(x)]$ as collections of terms quadratic in the coefficients $c_i$:

$B[Y(x)] = \sum_{i} \sum_{j} c_i c_j K_{ij} = \mathbf{c}^\mathsf{T} K \mathbf{c}$

$A[Y(x)] = \sum_{i} \sum_{j} c_i c_j M_{ij} = \mathbf{c}^\mathsf{T} M \mathbf{c}$

where $K$ and $M$ are the stiffness and mass matrices of the discretized system, respectively. The minimization of $\omega^2$ becomes:

$\frac{\partial \omega^2}{\partial c_i} = \frac{\partial}{\partial c_i} \left( \frac{\mathbf{c}^\mathsf{T} K \mathbf{c}}{\mathbf{c}^\mathsf{T} M \mathbf{c}} \right) = 0$

Solving this,

$\left( \mathbf{c}^\mathsf{T} M \mathbf{c} \right) \frac{\partial \left( \mathbf{c}^\mathsf{T} K \mathbf{c} \right)}{\partial c_i} - \left( \mathbf{c}^\mathsf{T} K \mathbf{c} \right) \frac{\partial \left( \mathbf{c}^\mathsf{T} M \mathbf{c} \right)}{\partial c_i} = 0,$

$K \mathbf{c} - \frac{\mathbf{c}^\mathsf{T} K \mathbf{c}}{\mathbf{c}^\mathsf{T} M \mathbf{c}} \, M \mathbf{c} = 0,$

$K \mathbf{c} - \omega^2 M \mathbf{c} = 0.$

For a non-trivial solution $\mathbf{c}$, we require the determinant of the matrix coefficient of $\mathbf{c}$ to be zero:

$\det \left( K - \omega^2 M \right) = 0.$

This gives a solution for the first $N$ eigenfrequencies and eigenmodes of the system, with $N$ being the number of approximating functions.
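To make this concrete, the following sketch assembles $K$ and $M$ for a hypothetical example (transverse vibration of a uniform taut string of length $L$, tension $T$, and linear density $\rho$, using polynomial trial functions $Y_i(x) = x^i (L - x)$ that vanish at both fixed ends) and solves the resulting generalized eigenvalue problem with SciPy. All parameter values and trial functions are illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative example: taut string of length L, tension T, linear density rho.
# Trial functions Y_i(x) = x**i * (L - x) vanish at both fixed ends.
L, T, rho, n = 1.0, 1.0, 1.0, 6
x = np.linspace(0.0, L, 4001)

Y = np.array([x**i * (L - x) for i in range(1, n + 1)])        # trial functions
dY = np.array([np.gradient(Yi, x) for Yi in Y])                # their derivatives

# K_ij = T * int Y_i' Y_j' dx  (potential),  M_ij = rho * int Y_i Y_j dx  (kinetic)
K = T * np.array([[np.trapz(dY[i] * dY[j], x) for j in range(n)] for i in range(n)])
M = rho * np.array([[np.trapz(Y[i] * Y[j], x) for j in range(n)] for i in range(n)])

w2, C = eigh(K, M)                 # generalized eigenproblem K c = w^2 M c
print(np.sqrt(w2[:3]))             # approximate first three natural frequencies
```

For these parameters the exact natural frequencies are $\omega_n = n\pi$, so the first few printed values should be close to $\pi$, $2\pi$, and $3\pi$.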

Simple case of double spring-mass system

The following discussion uses the simplest case, where the system has two lumped springs and two lumped masses, and only two mode shapes are assumed. Hence $M = [m_1, m_2]$ and $K = [k_1, k_2]$.

A mode shape is assumed for the system, with two terms, one of which is weighted by a factor $B$, e.g. $Y = [1, 1] + B[1, -1]$. Simple harmonic motion theory says that the velocity at the time when deflection is zero is the angular frequency $\omega$ times the deflection $y$ at the time of maximum deflection. In this example the kinetic energy (KE) for each mass is $\tfrac{1}{2} \omega^2 Y_1^2 m_1$ etc., and the potential energy (PE) for each spring is $\tfrac{1}{2} k_1 Y_1^2$ etc., where $Y_1$ and $Y_2$ are the deflections of the two masses at maximum displacement.

We also know that without damping, the maximal KE equals the maximal PE. Thus, for the two-mass chain,

$\frac{\omega^2}{2} \left( m_1 Y_1^2 + m_2 Y_2^2 \right) = \frac{1}{2} \left( k_1 Y_1^2 + k_2 \left( Y_2 - Y_1 \right)^2 \right)$

Note that the overall amplitude of the mode shape always cancels out from each side. That is, the actual size of the assumed deflection does not matter, just the mode shape.

Mathematical manipulations then obtain an expression for $\omega^2$ in terms of $B$, which can be differentiated with respect to $B$ to find the minimum, i.e. the value of $B$ for which $d\omega^2/dB = 0$. This gives the value of $B$ for which $\omega^2$ is lowest. This is an upper bound solution for $\omega^2$ if $\omega^2$ is hoped to be the predicted fundamental frequency of the system, because the mode shape is assumed; but we have found the lowest value of that upper bound, given our assumptions, because $B$ is used to find the optimal 'mix' of the two assumed mode shape functions.
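A minimal numerical sketch of this one-parameter minimization, assuming a wall–spring–mass–spring–mass chain and unit masses and stiffnesses chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative numbers for the two-mass chain described above:
# wall -- k1 -- m1 -- k2 -- m2, assumed shape Y = [1, 1] + B*[1, -1].
m1, m2, k1, k2 = 1.0, 1.0, 1.0, 1.0

def omega2(B):
    Y1, Y2 = 1.0 + B, 1.0 - B
    ke_factor = 0.5 * (m1 * Y1**2 + m2 * Y2**2)        # max KE = omega^2 * ke_factor
    pe = 0.5 * (k1 * Y1**2 + k2 * (Y2 - Y1)**2)        # max PE
    return pe / ke_factor                              # Rayleigh quotient omega^2(B)

res = minimize_scalar(omega2)
print(res.x, np.sqrt(omega2(res.x)))   # optimal B and the estimated omega
```

For these values the minimization returns $B \approx -0.24$ and $\omega \approx 0.62$, which in this case coincides with the exact fundamental frequency because the two assumed shapes span the entire two-dimensional space of possible mode shapes.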

There are many tricks with this method, the most important being to choose realistic assumed mode shapes. For example, in the case of beam deflection problems it is wise to use a deformed shape that is analytically similar to the expected solution. A quartic may fit most of the easy problems of simply linked beams, even if the order of the deformed solution may be lower. The springs and masses do not have to be discrete; they can be continuous (or a mixture), and this method can easily be used in a spreadsheet to find the natural frequencies of quite complex distributed systems, provided the distributed KE and PE terms can be described easily, or else the continuous elements can be broken up into discrete parts.

This method can be used iteratively, adding additional mode shapes to the previous best solution, or a long expression with many $B$s and many mode shapes can be built up and then differentiated partially.

Notes and references

  1. ^ a b Leissa, A.W. (2005). "The historical bases of the Rayleigh and Ritz methods". Journal of Sound and Vibration. 287 (4–5): 961–978. Bibcode:2005JSV...287..961L. doi:10.1016/j.jsv.2004.12.021.
  2. ^ a b Ilanko, Sinniah (2009). "Comments on the historical bases of the Rayleigh and Ritz methods". Journal of Sound and Vibration. 319 (1–2): 731–733. doi:10.1016/j.jsv.2008.06.001.
  3. ^ Trefethen, Lloyd N.; Bau, III, David (1997). Numerical Linear Algebra. SIAM. p. 254. ISBN 978-0-89871-957-4.