==Fast Fourier Transform for fast convolution calculations==
The use of the Fast Fourier Transform (FFT) to accelerate convolution operations in the discrete dipole approximation (DDA) was introduced by Goodman, Draine, and Flatau in 1991{{r|Goodman1991}}. Their approach used a 3D FFT algorithm (GPFA) developed by Clive Temperton{{r|Temperton1983}} and extended the interaction matrix to twice its size in each dimension. The extension was accomplished by mirroring the Green’s function blocks (with the appropriate sign changes) to incorporate negative lags, allowing the matrix-vector product to be evaluated as an FFT-based convolution. This technique of block extension with sign flips became a foundational step in efficient DDA implementations; a similar variant was adopted in the 2021 [[MATLAB]] implementation by Shabaninezhad and Ramakrishna{{r|matlab2021}}.
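A minimal sketch of this circulant-embedding idea is given below (an illustration under simplifying assumptions, not the published code): a scalar, translation-invariant stand-in for the Green’s function on an N × N × N dipole grid is mirrored into a 2N × 2N × 2N periodic array, after which the product of the interaction matrix with the dipole polarizations reduces to an element-wise multiplication in Fourier space. The real DDA kernel is a 3×3 tensor acting on vector polarizations; the scalar kernel and the grid parameters used here are placeholders.
<syntaxhighlight lang="python">
import numpy as np

def extend_kernel(g):
    """Mirror a kernel g[|dx|, |dy|, |dz|] (lags 0..N-1) into a 2N periodic array.

    The result is the defining column of a circulant embedding of the
    (block-)Toeplitz interaction matrix; its FFT is computed only once.
    """
    N = g.shape[0]
    idx = np.concatenate([np.arange(N), [0], np.arange(N - 1, 0, -1)])  # lags 0..N-1, pad, mirrored lags
    c = g[np.ix_(idx, idx, idx)].astype(complex)
    c[N, :, :] = 0.0   # the padding planes are never used for the
    c[:, N, :] = 0.0   # first N outputs, so their value is irrelevant;
    c[:, :, N] = 0.0   # zero is the conventional choice
    return c

def fft_matvec(c_hat, x):
    """Evaluate y_i = sum_j G(r_i - r_j) x_j via a circular convolution of size 2N."""
    N = x.shape[0]
    xpad = np.zeros(c_hat.shape, dtype=complex)
    xpad[:N, :N, :N] = x                          # zero-pad the dipole array to 2N per axis
    y = np.fft.ifftn(c_hat * np.fft.fftn(xpad))   # element-wise product in Fourier space
    return y[:N, :N, :N]                          # keep only the physical (non-wrapped) block

# Self-check against the direct summation on a tiny grid.
N, d, k = 4, 0.05, 2 * np.pi                      # grid size, spacing, wavenumber (placeholder values)
lag = np.arange(N) * d
dx, dy, dz = np.meshgrid(lag, lag, lag, indexing="ij")
r = np.sqrt(dx**2 + dy**2 + dz**2)
g = np.where(r > 0, np.exp(1j * k * r) / np.where(r > 0, r, 1.0), 0.0)  # scalar kernel, zero self-term

x = np.random.rand(N, N, N) + 1j * np.random.rand(N, N, N)
y_fft = fft_matvec(np.fft.fftn(extend_kernel(g)), x)

y_direct = np.zeros((N, N, N), dtype=complex)
for i in np.ndindex(N, N, N):
    for j in np.ndindex(N, N, N):
        y_direct[i] += g[abs(i[0] - j[0]), abs(i[1] - j[1]), abs(i[2] - j[2])] * x[j]

assert np.allclose(y_fft, y_direct)
</syntaxhighlight>
With the kernel transform precomputed, each matrix-vector product inside the iterative solver costs on the order of N<sup>3</sup> log N operations for N<sup>3</sup> dipoles, compared with N<sup>6</sup> for the direct summation.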
Several variants have been proposed since then. In 2001, Barrowes, Teixeira, and Kong{{r|Barrowes2001}} introduced a method based on block reordering, zero padding, and a reconstruction algorithm to minimize memory requirements. In 2009, McDonald, Golden, and Jennings{{r|mcdonald2009}} proposed a different scheme utilizing sequences of 1D FFTs, extending the interaction matrix separately in the x, y, and z directions. They argued that their approach leads to reduced memory consumption.
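The axis-by-axis extension can be illustrated with the following sketch (an illustration only, not the published algorithm): the zero padding is applied implicitly one axis at a time through the length argument of a 1D FFT, so the fully padded array never has to be assembled before the first transform.
<syntaxhighlight lang="python">
import numpy as np

def fft3_padded(x, N):
    """3D FFT of x zero-padded to 2N along each axis, built from 1D FFTs.

    The n argument pads each axis implicitly, mirroring the axis-by-axis
    extension of the interaction matrix in 1D-FFT DDA variants.
    """
    X = np.fft.fft(x, n=2 * N, axis=0)  # pad and transform along x
    X = np.fft.fft(X, n=2 * N, axis=1)  # then along y
    X = np.fft.fft(X, n=2 * N, axis=2)  # then along z
    return X

# Agrees with padding everything first and calling one 3D FFT.
N = 4
x = np.random.rand(N, N, N)
xpad = np.zeros((2 * N, 2 * N, 2 * N))
xpad[:N, :N, :N] = x
assert np.allclose(fft3_padded(x, N), np.fft.fftn(xpad))
</syntaxhighlight>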
More generally, advanced FFT-based convolution methods developed in the machine learning and numerical analysis communities offer potential benefits for DDA solvers as well. These include FlashFFTConv{{r|fu2023flashfftconv}} and implicitly dealiased convolutions that avoid explicit zero padding{{r|bowman2011efficient}}, both aimed at reducing the computational and memory burden of large-scale convolutions.
 
 
 
<ref name="Moncada-Villa2022">{{Cite journal | last1 = Moncada-Villa | first1 = E. | last2 = Cuevas | first2 = J. C. | year = 2022 | title = Thermal discrete dipole approximation for near-field radiative heat transfer in many-body systems with arbitrary nonreciprocal bodies | journal = Physical Review B | volume = 106 | issue = 23 | pages = 235430 | doi = 10.1103/PhysRevB.106.235430 | arxiv = 2206.14921 | bibcode = 2022PhRvB.106w5430M }}</ref>
 
<ref name="Temperton1983">C. Temperton. "Self-sorting mixed-radix fast Fourier transforms." Journal of Computational Physics, 52.1 (1983): 1–23.</ref>
 
}}