Discrete dipole approximation
 
Note on practical implementation: in Fortran and MATLAB, arrays such as <math>\mathbf{P}(n_x,n_y,n_z,3)</math> or <math>\mathbf{E}^{\mathrm{inc}}(n_x,n_y,n_z,3)</math> are stored in column-major order, in which the first index varies fastest in memory (anti-lexicographic). All x-components <math>\mathbf{P}_x = \mathbf{P}(:,:,:,1)</math> are therefore contiguous in memory, followed by all y-components <math>\mathbf{P}_y = \mathbf{P}(:,:,:,2)</math> and all z-components <math>\mathbf{P}_z = \mathbf{P}(:,:,:,3)</math>. Python (NumPy), in contrast, defaults to row-major order (lexicographic, last index varies fastest). To obtain the same contiguous layout of <math>\mathbf{P}_x</math>, <math>\mathbf{P}_y</math>, <math>\mathbf{P}_z</math>, the array should be declared in Python as <math>\mathbf{P}(3, n_x, n_y, n_z)</math>, with the vector-component index (x, y, z) first, so that <math>\mathbf{P}_x = \mathbf{P}[0, :, :, :]</math> is stored contiguously, followed by <math>\mathbf{P}_y</math> and <math>\mathbf{P}_z</math>.
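The layout described above can be verified directly in NumPy through the array contiguity flags (a minimal sketch; the grid sizes are arbitrary example values, not tied to any particular DDA code):

```python
import numpy as np

nx, ny, nz = 4, 5, 6   # arbitrary example grid dimensions

# With NumPy's default row-major (C) order, putting the component index
# FIRST makes each Cartesian component one contiguous block, mirroring
# the Fortran/MATLAB column-major layout of P(nx, ny, nz, 3).
P = np.zeros((3, nx, ny, nz))
assert P[0].flags["C_CONTIGUOUS"]          # P_x is one contiguous block

# With the component index last (as in the Fortran declaration), the
# x, y, z components are interleaved in memory instead.
Q = np.zeros((nx, ny, nz, 3))
assert not Q[..., 0].flags["C_CONTIGUOUS"]  # Q_x is strided, not contiguous
```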
 
 
==Conjugate gradient iteration schemes and preconditioning==
 
The linear system <math>\mathbf{A} \cdot \mathbf{P} = \mathbf{E}^{\mathrm{inc}}</math> in the DDA is typically solved with iterative methods, which reduce the norm of the residual <math>\mathbf{r} = \mathbf{E}^{\mathrm{inc}} - \mathbf{A} \cdot \mathbf{P}</math> through successive approximations of the polarization vector <math>\mathbf{P}</math>. The earliest calculations relied on direct matrix inversion,<ref name="purcell1973"/> followed by the conjugate gradient (CG) algorithm introduced by Petravic and Kuo-Petravic.<ref name="Petravic1979"/> Many conjugate gradient variants have since been tested and refined for DDA applications.<ref name="chaumet2024"/> These methods are particularly well suited to large systems because they require only the matrix–vector product <math>\mathbf{A} \cdot \mathbf{P}</math> and do not require storing the full matrix <math>\mathbf{A}</math> explicitly.
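The matrix-free character of these solvers can be sketched with a minimal conjugate gradient routine that touches the matrix only through a matvec callable. This is an illustration, not the article's code: the small symmetric positive-definite matrix is a stand-in for the DDA interaction matrix, which in practice is complex and non-Hermitian and therefore handled with CG variants suited to such systems.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Solve A x = b for symmetric positive-definite A, given only x -> A @ x."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((12, 12))
A = M @ M.T + 12 * np.eye(12)   # SPD stand-in for the interaction matrix
b = rng.standard_normal(12)     # stand-in for the incident-field vector

# The solver never sees A itself, only its action on a vector.
x = cg(lambda v: A @ v, b)
assert np.allclose(A @ x, b, atol=1e-8)
```

In a real DDA code the lambda would be replaced by the FFT-accelerated matrix–vector product, so the full matrix never needs to be formed.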
 
In practice, the dominant computational cost in DDA arises from the repeated evaluation of matrix-vector products during the iteration process. When the vector <math>\mathbf{P}</math> is stored in component-block form (as <math>\mathbf{P}_x</math>, <math>\mathbf{P}_y</math>, <math>\mathbf{P}_z</math>), the action of <math>\mathbf{A}</math> reduces to evaluating nine sub-products of the form <math>\mathbf{A}_{\alpha\beta} \mathbf{P}_\beta</math>, where <math>\alpha,\beta \in \{x,y,z\}</math>. These operations can be computed efficiently using convolution and FFT-based techniques when the dipole geometry is grid-based.
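The nine-sub-product structure can be illustrated in a 1-D analogue (a schematic sketch with placeholder kernels, not the physical dipole–dipole interaction): for dipoles on a regular line, each block <math>\mathbf{A}_{\alpha\beta}</math> depends only on the dipole separation, so <math>\mathbf{A}_{\alpha\beta} \mathbf{P}_\beta</math> is a discrete convolution. Here `np.convolve` stands in for the FFT-based evaluation.

```python
import numpy as np

n = 32                                      # dipoles on the line
rng = np.random.default_rng(3)
# g[a, b] holds a placeholder kernel for block A_ab at lags -(n-1) .. (n-1);
# index m corresponds to lag m - (n - 1).
g = rng.standard_normal((3, 3, 2 * n - 1))
P = rng.standard_normal((3, n))             # component blocks P_x, P_y, P_z

# E_a = sum over b of (A_ab * P_b): nine convolution sub-products.
E = np.zeros((3, n))
for a in range(3):
    for b in range(3):
        full = np.convolve(g[a, b], P[b])   # full linear convolution
        E[a] += full[n - 1 : 2 * n - 1]     # entries i = 0 .. n-1 of the product
```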
 
 
The Fast Fourier Transform (FFT) method was introduced to the discrete dipole approximation in 1991 by Goodman, Draine, and Flatau.{{r|Goodman1991}} They used the 3D generalized prime factor algorithm (GPFA) FFT code written by Clive Temperton, extending the interaction matrix to twice its original size, mirrored and reversed, to incorporate negative lags. Several variants have been developed since. Barrowes, Teixeira, and Kong{{r|Barrowes2001}} in 2001 developed a code that uses block reordering, zero padding, and a reconstruction algorithm, claiming minimal memory usage. McDonald, Golden, and Jennings{{r|mcdonald2009}} in 2009 used a 1D FFT code and extended the interaction matrix along the x, y, and z directions of the FFT calculations, reporting memory savings from this approach. This variant was also implemented in the [[MATLAB]] 2021 code by Shabaninezhad and Ramakrishna.{{r|matlab2021}} Other techniques for accelerating convolutions have been suggested in a general context,{{r|fu2023flashfftconv}}{{r|bowman2011efficient}} along with faster evaluations of the Fast Fourier Transforms arising in DDA problem solvers.
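The grid-doubling idea can be sketched in a 1-D analogue (an assumed simplification with a placeholder kernel, not the Goodman–Draine–Flatau code itself): the interaction kernel, known for lags <math>-(n-1),\ldots,n-1</math>, is wrapped into an array of length <math>2n</math> so that a single circular (FFT) convolution with the zero-padded polarization reproduces the exact linear matrix–vector product on the first <math>n</math> entries.

```python
import numpy as np

n = 50
rng = np.random.default_rng(4)
lag = rng.standard_normal(2 * n - 1)   # placeholder kernel; index k <-> lag k-(n-1)

# Wrap into a length-2n circular kernel: non-negative lags first, then the
# mirrored negative lags at the tail (entry n is unused padding).
c = np.zeros(2 * n)
c[:n] = lag[n - 1:]                    # lags 0 .. n-1
c[n + 1:] = lag[:n - 1]                # lags -(n-1) .. -1

P = rng.standard_normal(n)
P_pad = np.concatenate([P, np.zeros(n)])   # zero-pad onto the doubled grid

# One circular convolution via FFT; the first n entries are the exact product.
E = np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(P_pad))[:n]

# Direct O(n^2) evaluation for comparison.
direct = np.array([sum(lag[(i - j) + n - 1] * P[j] for j in range(n))
                   for i in range(n)])
assert np.allclose(E, direct)
```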
 
 
Advances in the preconditioning of the linear systems of equations arising in the DDA setup have also been reported.<ref name="chaumet2023accelerating"/>
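The simplest preconditioning scheme, Jacobi (diagonal) scaling, can be illustrated on a toy matrix (an assumed stand-in, unrelated to the cited work): rescaling the system by its diagonal lowers the condition number, which governs how fast iterative solvers converge.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40

# SPD test matrix with a badly scaled diagonal, standing in for A.
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)
scale = np.diag(10.0 ** rng.uniform(-3, 3, n))   # wildly different row scales
A = scale @ A0 @ scale

# Symmetric Jacobi preconditioning: D^{-1/2} A D^{-1/2} with D = diag(A).
d = np.sqrt(np.diag(A))
A_prec = A / np.outer(d, d)

# The scaled system is far better conditioned than the original.
assert np.linalg.cond(A_prec) < np.linalg.cond(A)
```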
 
==Thermal discrete dipole approximation==