Bruun's FFT algorithm

'''Bruun's algorithm''' is a [[fast Fourier transform]] (FFT) algorithm based on an unusual recursive [[polynomial]]-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996. Because its operations involve only real coefficients until the last computation stage, it was initially proposed as a way to efficiently compute the [[discrete Fourier transform]] (DFT) of real data. Bruun's algorithm has not seen widespread use, however, as approaches based on the ordinary [[Cooley-Tukey FFT algorithm]] have been successfully adapted to real data with at least as much efficiency. Furthermore, there is evidence that Bruun's algorithm may be intrinsically less accurate than Cooley-Tukey in the face of finite numerical precision (Storn, 1993).
 
Nevertheless, Bruun's algorithm illustrates an alternative algorithmic framework that can express both itself and the Cooley-Tukey algorithm, and thus provides an interesting perspective on FFTs that permits mixtures of the two algorithms and other generalizations.
== A polynomial approach to the DFT ==

Recall that the DFT is defined by the formula:

:<math>X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} nk }, \qquad k = 0,\dots,N-1. </math>
 
For convenience, let us denote the ''N'' [[root of unity|roots of unity]] by ω<sub>''N''</sub><sup>''n''</sup> (''n''=0..''N''-1):
 
:<math>\omega_N^n = e^{-\frac{2\pi i}{N} n }</math>
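
For concreteness, here is a minimal sketch in Python/NumPy (the name <code>dft_direct</code> is purely illustrative) that evaluates this definition directly, requiring O(''N''<sup>2</sup>) operations:

<syntaxhighlight lang="python">
import numpy as np

def dft_direct(x):
    """Evaluate X_k = sum_n x_n * w_N^{nk} straight from the definition (O(N^2))."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    w = np.exp(-2j * np.pi * np.arange(N) / N)   # the N roots of unity w_N^n
    return np.array([np.sum(x * w**k) for k in range(N)])

# Sanity check against NumPy's built-in FFT (same sign convention).
x = np.random.rand(8)
assert np.allclose(dft_direct(x), np.fft.fft(x))
</syntaxhighlight>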
In order to compute the DFT, we need to evaluate the remainder of ''x''(''z'') modulo ''N'' [[monomial]]s as described above. Evaluating these remainders one by one is equivalent to evaluating the usual DFT formula directly, and requires O(''N''<sup>2</sup>) operations. However, one can ''combine'' these remainders recursively to reduce the cost, using the following trick: if we want to evaluate ''x''(''z'') modulo two polynomials ''U''(''z'') and ''V''(''z''), we can first take the remainder modulo their product ''U''(''z'') ''V''(''z''), which reduces the [[degree (mathematics)|degree]] of the polynomial ''x''(''z'') and makes subsequent modulo operations less computationally expensive.
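
As a sketch of this trick (using NumPy's <code>polydiv</code> for polynomial remainders; the helper name <code>poly_mod</code> and the chosen indices are arbitrary), reducing ''x''(''z'') modulo the product of two monomials first and then modulo one of them yields the same DFT coefficient as reducing modulo that monomial directly:

<syntaxhighlight lang="python">
import numpy as np

def poly_mod(p, m):
    """Remainder of polynomial p modulo m (coefficient arrays, highest degree first)."""
    return np.polydiv(p, m)[1]

N = 8
x = np.random.rand(N)
p = x[::-1]                                  # x(z) = sum_n x_n z^n as a coefficient array
w = np.exp(-2j * np.pi * np.arange(N) / N)   # roots of unity w_N^k

# x(z) mod (z - w_N^3) is the constant X_3 ...
direct = poly_mod(p, np.array([1.0, -w[3]]))[-1]

# ... and reducing modulo the product (z - w_N^2)(z - w_N^3) first gives the same result.
U, V = np.array([1.0, -w[2]]), np.array([1.0, -w[3]])
combined = poly_mod(poly_mod(p, np.polymul(U, V)), V)[-1]

assert np.allclose(direct, np.fft.fft(x)[3])
assert np.allclose(combined, direct)
</syntaxhighlight>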
 
The product of all of the monomials (''z'' - ω<sub>''N''</sub><sup>''k''</sup>) for ''k''=0..''N''-1 is simply ''z''<sup>''N''</sup>-1 (whose roots are clearly the ''N'' roots of unity). One then wishes to find a recursive factorization of ''z''<sup>''N''</sup>-1 into polynomials of few terms and smaller and smaller degree. To compute the DFT, one takes ''x''(''z'') modulo each level of this factorization in turn, recursively, until one arrives at the monomials and the final result. If each level of the factorization splits every polynomial into an O(1) (constant-bounded) number of smaller polynomials, each with an O(1) number of nonzero coefficients, then the modulo operations for that level take O(''N'') time; since there will be a logarithmic number of levels, the overall complexity is O(''N'' log ''N'').
 
More explicitly, suppose for example that ''z''<sup>''N''</sup>-1 = ''F''<sub>1</sub>(''z'') ''F''<sub>2</sub>(''z'') ''F''<sub>3</sub>(''z''), and that ''F''<sub>''k''</sub>(''z'') = ''F''<sub>''k'',1</sub>(''z'') ''F''<sub>''k'',2</sub>(''z''), and so on. The corresponding FFT algorithm would consist of first computing ''x''<sub>''k''</sub>(''z'') = ''x''(''z'') mod ''F''<sub>''k''</sub>(''z''), then computing ''x''<sub>''k'',''j''</sub>(''z'') = ''x''<sub>''k''</sub>(''z'') mod ''F''<sub>''k'',''j''</sub>(''z''), and so on recursively down the factorization, until one arrives at the monomials and the final remainders.
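
The sketch below makes this recursion concrete for ''N'' = 4 (the tree representation and the names used are illustrative only): ''x''(''z'') is reduced down a two-level factorization of ''z''<sup>4</sup>-1, and the remainders at the monomial leaves are the four DFT outputs.

<syntaxhighlight lang="python">
import numpy as np

def remainder_tree(p, node):
    """Reduce the coefficient array p modulo each factor in a nested factorization.

    `node` is a list of (factor, subtree) pairs; a subtree of None marks a leaf
    (a monomial z - w), whose remainder is the DFT output at that root.
    """
    out = []
    for factor, subtree in node:
        r = np.polydiv(p, factor)[1]          # p mod factor
        out += [r[-1]] if subtree is None else remainder_tree(r, subtree)
    return out

# z^4 - 1 = (z^2 - 1)(z^2 + 1); z^2 - 1 = (z - 1)(z + 1); z^2 + 1 = (z + i)(z - i).
# The roots 1, -1, -i, i are w_4^0, w_4^2, w_4^1, w_4^3 respectively.
tree = [
    (np.array([1.0, 0.0, -1.0]), [(np.array([1.0, -1.0]), None),     # -> X_0
                                  (np.array([1.0,  1.0]), None)]),   # -> X_2
    (np.array([1.0, 0.0,  1.0]), [(np.array([1.0,  1.0j]), None),    # -> X_1
                                  (np.array([1.0, -1.0j]), None)]),  # -> X_3
]

x = np.random.rand(4)
X0, X2, X1, X3 = remainder_tree(x[::-1], tree)
assert np.allclose([X0, X1, X2, X3], np.fft.fft(x))
</syntaxhighlight>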
====Cooley-Tukey as polynomial factorization====
 
The standard decimation-in-frequency (DIF) radix-''r'' Cooley-Tukey algorithm corresponds closely to a recursive factorization. For example, radix-2 DIF Cooley-Tukey factors ''z''<sup>''N''</sup>-1 into ''F''<sub>1</sub> = (''z''<sup>''N''/2</sup>-1) and ''F''<sub>2</sub> = (''z''<sup>''N''/2</sup>+1). These modulo operations halve the degree of ''x''(''z''), which corresponds to dividing the problem size by 2. Instead of recursively factorizing ''F''<sub>2</sub> directly, though, Cooley-Tukey instead first computes ''x''<sub>2</sub>(''z'' ω<sub>''N''</sub>), shifting all the roots (by a ''twiddle factor'') so that it can apply the recursive factorization of ''F''<sub>1</sub> to both subproblems. That is, Cooley-Tukey ensures that all subproblems are also DFTs, whereas this is not generally true for an arbitrary recursive factorization (such as Bruun's, below).
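
Expressed this way, radix-2 DIF Cooley-Tukey amounts to the two remainder operations followed by the twiddle-factor shift on the second half; the recursive sketch below (assuming a power-of-two length; the function name is illustrative) follows that description:

<syntaxhighlight lang="python">
import numpy as np

def fft_dif_radix2(x):
    """Decimation-in-frequency radix-2 FFT written as the two remainder steps."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    half = N // 2
    w = np.exp(-2j * np.pi * np.arange(half) / N)   # twiddle factors w_N^n
    # x(z) mod (z^{N/2} - 1): coefficients x_n + x_{n+N/2}
    even_poly = x[:half] + x[half:]
    # x(z) mod (z^{N/2} + 1), then the root-shifting substitution z -> z*w_N,
    # which multiplies coefficient n by w_N^n so the subproblem is again a DFT.
    odd_poly = (x[:half] - x[half:]) * w
    X = np.empty(N, dtype=complex)
    X[0::2] = fft_dif_radix2(even_poly)             # outputs X_{2k}
    X[1::2] = fft_dif_radix2(odd_poly)              # outputs X_{2k+1}
    return X

x = np.random.rand(16)
assert np.allclose(fft_dif_radix2(x), np.fft.fft(x))
</syntaxhighlight>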
 
== The Bruun factorization ==
:<math>z^{4M} + az^{2M} + 1 = (z^{2M} + \sqrt{2-a}z^M+1) (z^{2M} - \sqrt{2-a}z^M + 1)</math>
 
where ''a'' is a real constant with |''a''| ≤ 2. At the end of the recursion, for ''M''=1, one is left with degree-2 polynomials that can then be evaluated modulo two roots (''z'' - ω<sub>''N''</sub><sup>''k''</sup>) for each polynomial. Thus, at each recursive stage, all of the polynomials are factorized into two parts of half the degree, each of which has at most three nonzero terms, leading to an O(''N'' log ''N'') algorithm for the FFT.
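
As an illustrative sketch (the helper names and the power-of-two restriction are assumptions of this example, not notation from the original papers), one can generate such a factorization by starting from ''z''<sup>''N''</sup>-1 = (''z''<sup>''N''/2</sup>-1)(''z''<sup>''N''/2</sup>+1) and applying the identity above until only degree-1 and degree-2 factors remain, then check that the pieces multiply back to ''z''<sup>''N''</sup>-1:

<syntaxhighlight lang="python">
import numpy as np

def bruun_quadratics(M, a):
    """Recursively split z^{2M} + a*z^M + 1 (M a power of two, |a| <= 2) into
    real-coefficient factors, returning the degree-2 leaves z^2 + b*z + 1."""
    if M == 1:
        return [np.array([1.0, a, 1.0])]
    s = np.sqrt(2.0 - a)                      # real because |a| <= 2
    return bruun_quadratics(M // 2, s) + bruun_quadratics(M // 2, -s)

def bruun_factors(N):
    """Real-coefficient factors of z^N - 1 (N a power of two) down to degree <= 2,
    via z^N - 1 = (z^{N/2} - 1)(z^{N/2} + 1) and the splitting rule above."""
    factors = [np.array([1.0, -1.0]), np.array([1.0, 1.0])]   # z - 1 and z + 1
    M = N // 2
    while M >= 2:
        factors += bruun_quadratics(M // 2, 0.0)               # leaves of z^M + 1
        M //= 2
    return factors

# Check that the product of all the pieces reconstructs z^N - 1.
N = 16
product = np.array([1.0])
for f in bruun_factors(N):
    product = np.polymul(product, f)
target = np.zeros(N + 1)
target[0], target[-1] = 1.0, -1.0
assert np.allclose(product, target)
</syntaxhighlight>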
 
Moreover, since all of these polynomials have purely real coefficients (until the very last stage), they automatically exploit the special case where the inputs ''x''<sub>''n''</sub> are purely real to save roughly a factor of two in computation and storage. One can also take straightforward advantage of the case of real-symmetric data for computing the [[discrete cosine transform]] (Chen and Sorensen, 1992).
==== Generalization to arbitrary radices ====
 
The Bruun factorization, and thus the Bruun FFT algorithm, was generalized to handle arbitrary ''even'' composite lengths, i.e. dividing the polynomial degree by an arbitrary ''radix'' (factor), as follows. First, we define a set of polynomials φ<sub>''N'',α</sub>(''z'') for positive integers ''N'' and for α in <nowiki>[0,1)</nowiki> by:
 
:<math>\phi_{N, \alpha}(z) =
\begin{cases}
z^{2N} - 2 \cos (2 \pi \alpha) z^N + 1 & \mbox{if } 0 < \alpha < 1 \\
z^N - 1 & \mbox{if } \alpha = 0
\end{cases}
</math>
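
As a brief check of this definition (the helper name <code>phi</code> is illustrative), note that for 0 < α < 1 the polynomial has real coefficients yet factors over the complex numbers as (''z''<sup>''N''</sup> - ''e''<sup>2πiα</sup>)(''z''<sup>''N''</sup> - ''e''<sup>-2πiα</sup>), so its roots are the 2''N'' points whose ''N''th power is ''e''<sup>±2πiα</sup>:

<syntaxhighlight lang="python">
import numpy as np

def phi(N, alpha):
    """Coefficients (highest degree first) of phi_{N,alpha} as defined above."""
    if alpha == 0:
        c = np.zeros(N + 1)
        c[0], c[-1] = 1.0, -1.0                                # z^N - 1
    else:
        c = np.zeros(2 * N + 1)
        c[0], c[N], c[-1] = 1.0, -2.0 * np.cos(2 * np.pi * alpha), 1.0
    return c

# For 0 < alpha < 1: phi_{N,alpha}(z) = (z^N - e^{2*pi*i*alpha}) (z^N - e^{-2*pi*i*alpha}).
N, alpha = 4, 0.3
a = np.exp(2j * np.pi * alpha)
lhs = phi(N, alpha)
rhs = np.polymul(np.concatenate(([1.0], np.zeros(N - 1), [-a])),
                 np.concatenate(([1.0], np.zeros(N - 1), [-np.conj(a)])))
assert np.allclose(lhs, rhs)
</syntaxhighlight>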