Multidimensional discrete convolution
The row-column method can be applied when one of the signals in the convolution is separable. It exploits separability to compute the convolution of two multidimensional signals more efficiently than direct evaluation of each output sample.<ref>{{cite book|last1=Sihvo|first1=Tero|title=International Symposium on Signals, Circuits and Systems, 2005. ISSCS 2005|volume=1|pages=99–102|last2=Niittylahti|first2=Jarkko|chapter=Row-Column Decomposition Based 2D Transform Optimization on Subword Parallel Processors|date=5 June 2005|doi=10.1109/ISSCS.2005.1509860|isbn=978-0-7803-9029-4}}</ref> The following shows the mathematical reasoning behind the row-column decomposition (typically <math>h(n_1,n_2)</math> is the separable signal):
 
<math>
\begin{align}
y(n_1,n_2)&=\sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} h(k_1,k_2)x(n_1-k_1,n_2-k_2)\\
&=\sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} h_1(k_1)h_2(k_2)x(n_1-k_1,n_2-k_2)\\
&=\sum_{k_1=-\infty}^{\infty}h_1(k_1)\Bigg[ \sum_{k_2=-\infty}^{\infty} h_2(k_2)x(n_1-k_1,n_2-k_2)\Bigg]
\end{align}
</math>
 
The inner sums <math>\sum_{k_2=-\infty}^{\infty} h_2(k_2)x(n_1-k_1,n_2-k_2)</math> depend only on <math>n_1-k_1</math> and <math>n_2</math>, so they can be re-used when evaluating other values of <math>y</math> that share the same <math>n_2</math>:
 
<math>
\begin{align}
y(n_1+\delta,n_2)&=\sum_{k_1=-\infty}^{\infty}h_1(k_1)\Bigg[ \sum_{k_2=-\infty}^{\infty} h_2(k_2)x(n_1-[k_1-\delta],n_2-k_2)\Bigg]\\
&=\sum_{k_1=-\infty}^{\infty}h_1(k_1+\delta)\Bigg[ \sum_{k_2=-\infty}^{\infty} h_2(k_2)x(n_1-k_1,n_2-k_2)\Bigg]
\end{align}
</math>
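The saving from this re-use can be made concrete with an operation count (illustrative numbers, not from the cited source): for a <math>P_1 \times P_2</math> separable kernel, direct evaluation of one output sample costs <math>P_1 P_2</math> multiplies, whereas the two-pass scheme costs only <math>P_2</math> multiplies per intermediate sample plus <math>P_1</math> per final sample, roughly <math>P_1 + P_2</math> in total:

```python
# Illustrative multiply counts per output sample for a hypothetical
# 9x9 separable kernel (the kernel size is an assumption for illustration).
P1, P2 = 9, 9

direct = P1 * P2       # h(k1,k2) applied point by point: 81 multiplies
row_column = P1 + P2   # one 1D pass per dimension: 18 multiplies

print(direct, row_column)
```

The ratio grows with the kernel size, which is why the decomposition pays off most for large separable filters.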
 
Thus, the full convolution can be computed efficiently by first performing a 1D convolution on every row of <math>x(n_1,n_2)</math>, and then on every column of the intermediate result. This approach can be further optimized by taking into account how memory is accessed within a computer processor.
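The two-pass computation can be checked against direct 2D convolution. The following is a minimal NumPy sketch (the kernel and input values are arbitrary choices for illustration, not from the cited source):

```python
import numpy as np

# A separable kernel: h(n1, n2) = h1(n1) * h2(n2)
h1 = np.array([1.0, 2.0, 1.0])
h2 = np.array([1.0, 2.0, 1.0])
h = np.outer(h1, h2)                          # equivalent full 2D kernel

x = np.arange(25, dtype=float).reshape(5, 5)  # arbitrary test input

def conv2d_direct(x, h):
    """Direct 'full' 2D convolution: every output sample sums over
    all P1*P2 kernel taps."""
    y = np.zeros((x.shape[0] + h.shape[0] - 1,
                  x.shape[1] + h.shape[1] - 1))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Scatter x[i, j] * h(k1, k2) into the output.
            y[i:i + h.shape[0], j:j + h.shape[1]] += x[i, j] * h
    return y

def conv2d_row_column(x, h1, h2):
    """Row-column decomposition: 1D-convolve every row with h2,
    then every column of the intermediate result with h1."""
    rows = np.array([np.convolve(r, h2) for r in x])        # row pass
    return np.array([np.convolve(c, h1) for c in rows.T]).T  # column pass

y_direct = conv2d_direct(x, h)
y_rc = conv2d_row_column(x, h1, h2)
assert np.allclose(y_direct, y_rc)  # both passes give the same result
```

Both functions produce the 7×7 "full" output, and the row-column version matches the direct computation exactly, as the derivation above predicts.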