Multidimensional discrete convolution

In signal processing, '''multidimensional discrete convolution''' refers to the mathematical operation between two functions ''f'' and ''g'' on an ''n''-dimensional lattice that produces a third function, also defined on an ''n''-dimensional lattice. Multidimensional discrete convolution is the discrete analog of the [[convolution#Domain of definition|multidimensional convolution]] of functions on Euclidean space. It is also a special case of [[convolution#Convolutions on groups|convolution on groups]] when the [[group (mathematics)|group]] is the group of ''n''-tuples of integers.
 
==Definition==
The resulting output region of support of a discrete multidimensional convolution will be determined based on the size and regions of support of the two input signals.[[File:Picture1_wiki.png|thumb|475px|Visualization of Convolution between two Simple Two-Dimensional Signals|none]]
 
Listed below are several properties of the two-dimensional convolution operator. Note that these can also be extended to signals of <math>N</math> dimensions.
 
'''''Commutative Property:'''''
<math>x**(h+g) = (x**h) + (x**g)</math>
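
As a quick numerical check, the distributive property can be verified directly. The following is a minimal NumPy sketch (the array sizes and the direct-convolution helper `conv2d` are illustrative, not from the article):

```python
import numpy as np

def conv2d(x, h):
    """Direct two-dimensional linear convolution (full output)."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))   # arbitrary input signal
h = rng.standard_normal((3, 3))   # arbitrary impulse responses
g = rng.standard_normal((3, 3))

lhs = conv2d(x, h + g)                 # x ** (h + g)
rhs = conv2d(x, h) + conv2d(x, g)      # (x ** h) + (x ** g)
print(np.allclose(lhs, rhs))           # True
```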
 
These properties are seen in use in the figure below. Given some input <math>x(n_1, n_2)</math> that goes into a filter with impulse response <math>h(n_1, n_2)</math> and then another filter with impulse response <math>g(n_1, n_2)</math>, the output is given by <math>y(n_1, n_2)</math>. Assuming that the output of the first filter is given by <math>w(n_1, n_2)</math>, this means that:
 
<math>w = x ** h</math>
 
<math>h_{eq} = h**g</math>
[[File:Cascaded.png|none|thumb|272x272px|Both figures represent cascaded systems. Note that the order of the filters does not affect the output.]]
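
The cascade equivalence can likewise be checked numerically. Below is a small NumPy sketch (the sizes and the `conv2d` helper are illustrative) confirming that filtering by <math>h</math> then <math>g</math>, in either order, matches filtering once by <math>h_{eq} = h**g</math>:

```python
import numpy as np

def conv2d(x, h):
    """Direct two-dimensional linear convolution (full output)."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 6))
h = rng.standard_normal((3, 3))
g = rng.standard_normal((2, 2))

cascade_hg = conv2d(conv2d(x, h), g)   # x -> h -> g
cascade_gh = conv2d(conv2d(x, g), h)   # x -> g -> h (order swapped)
h_eq = conv2d(h, g)                    # single equivalent filter
combined = conv2d(x, h_eq)

print(np.allclose(cascade_hg, combined), np.allclose(cascade_gh, combined))
```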
 
A similar analysis can be done on a set of parallel systems illustrated below.
 
[[File:Parallel systems.png|none|thumb|A system with a set of parallel filters.]]
 
In this case, it is clear that:
===Motivation & Applications===
 
Convolution in one dimension was a powerful discovery that allowed the input and output of a linear shift-invariant (LSI) system (see [[LTI system theory]]) to be easily compared so long as the impulse response of the filter system was known. This notion carries over to multidimensional convolution as well, since simply knowing the impulse response of a multidimensional filter allows a direct comparison to be made between the input and output of a system. This is significant since many of the signals transferred in the digital world today, such as images and videos, have multiple dimensions. Similar to the one-dimensional convolution, the multidimensional convolution allows the computation of the output of an LSI system for a given input signal.
 
For example, consider an image that is sent over some wireless network subject to electro-optical noise. Possible noise sources include errors in channel transmission, the analog-to-digital converter, and the image sensor. Usually noise caused by the channel or sensor creates spatially independent, high-frequency signal components that translate to arbitrary light and dark spots on the actual image. In order to rid the image data of the high-frequency spectral content, it can be multiplied by the frequency response of a low-pass filter, which, by the convolution theorem, is equivalent to convolving the signal in the time/spatial ___domain with the impulse response of the low-pass filter. Several impulse responses that do so are shown below.<ref>{{Cite web|title = MARBLE: Interactive Vision|url = http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MARBLE/|website = homepages.inf.ed.ac.uk|accessdate = 2015-11-12}}</ref>
 
[[File:Screen Shot 2015-11-11 at 11.18.23 PM.png|none|thumb|311x311px|Impulse Responses of Typical Multidimensional Low Pass Filters]]
 
In addition to filtering out spectral content, the multidimensional convolution can implement edge detection and smoothing. This once again is wholly dependent on the values of the impulse response that is used to convolve with the input image. Typical impulse responses for edge detection are illustrated below.
 
[[File:Screen Shot 2015-11-11 at 11.21.00 PM.png|none|thumb|Typical Impulse Responses for Edge Detection]]
 
===Computational Speedup from Row-Column Decomposition===
Examine the case where an image of size <math>X\times Y</math> is being passed through a separable filter of size <math>J\times K</math>. The image itself is not separable. If the result is calculated using the direct convolution approach without exploiting the separability of the filter, this will require approximately <math>XYJK</math> multiplications and additions. If the separability of the filter is taken into account, the filtering can be performed in two steps. The first step will have <math>XYJ</math> multiplications and additions and the second step will have <math>XYK</math>, resulting in a total of <math>XYJ+XYK</math> or <math>XY(J+K)</math> multiplications and additions.<ref>{{cite web|last1=Eddins|first1=Steve|title=Separable Convolution|url=http://blogs.mathworks.com/steve/2006/10/04/separable-convolution/|website=MathWorks|accessdate=10 November 2015}}</ref> A comparison of the computational complexity between direct and separable convolution is given in the following image:
 
[[File:Picture2 wiki.png|thumb|400px|Number of computations passing a ''10 x 10'' Image through a filter of size ''J x K'' where ''J = K'' varies in size from ''1'' to ''10''|none]]
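
A sketch of the two-step row-column filtering in NumPy (the sizes are illustrative, and the filter is built as an outer product `np.outer(v, u)` so that it is separable by construction):

```python
import numpy as np

def conv2d(x, h):
    """Direct 2-D linear convolution, used only as a reference."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal((10, 10))   # X x Y image (not itself separable)
v = rng.standard_normal(3)          # column factor of the filter (length J)
u = rng.standard_normal(4)          # row factor of the filter (length K)
h = np.outer(v, u)                  # separable J x K filter

# Step 1: convolve every row with u  (~ X*Y*K operations)
tmp = np.array([np.convolve(row, u) for row in x])
# Step 2: convolve every column with v  (~ X*Y*J operations)
y_sep = np.array([np.convolve(col, v) for col in tmp.T]).T

print(np.allclose(y_sep, conv2d(x, h)))   # matches the direct approach
```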
 
===Convolution Theorem in Multiple Dimensions===
 
For one-dimensional signals, the [[Convolution theorem|Convolution Theorem]] states that the [[Fourier transform]] of the convolution between two signals is equal to the product of the Fourier Transforms of those two signals. Thus, convolution in the time ___domain is equal to multiplication in the frequency ___domain. Mathematically, this principle is expressed via the following:<math display="block">y(n)=h(n)*x(n)\longleftrightarrow Y(\omega)=H(\omega)X(\omega)</math>This principle is directly extendable to dealing with signals of multiple dimensions.<math display="block">y(n_1,n_2,...,n_M)=h(n_1,n_2,...,n_M)*\overset{M}{\cdots}*x(n_1,n_2,...,n_M) \longleftrightarrow Y(\omega_1,\omega_2,...,\omega_M)=H(\omega_1,\omega_2,...,\omega_M)X(\omega_1,\omega_2,...,\omega_M)</math>This property readily extends to the [[Discrete Fourier transform]] (DFT) as follows (note that linear convolution is replaced with circular convolution, where <math>\otimes</math> is used to denote the circular convolution operation of size <math>N</math>):
 
The result will be that <math>y_{circular}(n_1,n_2)</math> will be a spatially aliased version of the linear convolution result <math>y_{linear}(n_1,n_2)</math>. This can be expressed as the following:
 
<math>y_{circular}(n_1,n_2)=\sum_{r_1} \sum_{r_2} y_{linear}(n_1-r_1N_1, n_2-r_2N_2){\mathrm{\,\,\,for\,\,\,}}(n_1,n_2) \in R_{N_1N_2}</math>
 
Then, in order to avoid aliasing between the spatially aliased replicas, <math>N_1</math> and <math>N_2</math> must be chosen to satisfy the following conditions:
If these conditions are satisfied, then the results of the circular convolution will equal that of the linear convolution (taking the main period of the circular convolution as the region of support). That is:
 
<math>y_{circular}(n_1,n_2)=y_{linear}(n_1,n_2)</math> for <math>(n_1,n_2) \in R_{N_1N_2}</math>
 
===Summary of Procedure Using DFTs===
# Compute the DFTs of both <math>h(n_1,n_2)</math> and <math>x(n_1,n_2)</math>
# Multiply the results of the DFTs to obtain <math>Y(k_1,k_2)=H(k_1,k_2)X(k_1,k_2)</math>
# The result of the IDFT of <math>Y(k_1,k_2)</math> will then be equal to the result of performing linear convolution on the two signals
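
The steps above can be sketched with NumPy's FFT routines. Zero-padding each DFT to at least the linear-convolution size ensures that the circular result equals the linear one (the sizes and the `conv2d` reference helper are illustrative):

```python
import numpy as np

def conv2d(x, h):
    """Direct 2-D linear convolution, used only as a reference."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 8))
h = rng.standard_normal((4, 4))

# Choose DFT sizes N_i >= (input size) + (filter size) - 1 to avoid aliasing
N1 = x.shape[0] + h.shape[0] - 1
N2 = x.shape[1] + h.shape[1] - 1

# Zero-padded DFTs of both signals, multiplied pointwise
Y = np.fft.fft2(h, (N1, N2)) * np.fft.fft2(x, (N1, N2))
# The IDFT then equals the linear convolution
y = np.real(np.fft.ifft2(Y))

print(np.allclose(y, conv2d(x, h)))   # True
```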
 
==Overlap and Add==
 
Another method to perform multidimensional convolution is the '''overlap and add''' approach. This method helps reduce the computational complexity often associated with multidimensional convolutions due to the vast amounts of data inherent in modern-day digital systems.<ref>{{cite journal|last1 = Fernandez|first1 = Joseph|last2 = Kumar|first2 = Vijaya|title = Multidimensional Overlap-Add and Overlap-Save for Correlation and Convolution|journal = IEEE|date = |issue = Image Processing (ICIP)|pages = 509–513}}</ref> For the sake of brevity, the two-dimensional case is used as an example, but the same concepts can be extended to multiple dimensions.
 
Consider a two-dimensional convolution using a direct computation:
<math>y(n_1, n_2) = \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} x(n_1 - k_1, n_2 - k_2)h(k_1, k_2)</math>
 
Assuming that the output signal <math>y(n_1, n_2)</math> has <math>N</math> nonzero coefficients, and the impulse response has <math>M</math> nonzero samples, this direct computation would require <math>MN</math> multiplications and <math>MN - 1</math> additions. Using an FFT instead, the frequency response of the filter and the Fourier transform of the input would have to be stored in memory.<ref>{{Cite web|url = http://www.eeng.dcu.ie/~ee502/EE502s4.pdf|title = 2D Signal Processing|date = |accessdate = November 11, 2015|website = EE502: Digital Signal Processing|publisher = Dublin City University|last = |first = |page = 24}}</ref> Massive amounts of computation and excessive use of memory storage space pose a problematic issue as more dimensions are added. This is where the overlap and add convolution method comes in.
 
===Decomposition into Smaller Convolution Blocks===
Instead of performing convolution on the blocks of information in their entirety, the information can be broken up into smaller blocks of dimensions <math>L_1 \times L_2</math>, resulting in smaller FFTs, less computational complexity, and less storage needed. This can be expressed mathematically as follows:
 
<math>x(n_1, n_2) = \sum_{i=1}^{P_1} \sum_{j=1}^{P_2}x_{ij}(n_1, n_2)</math>
 
===Pictorial Method of Operation===
In order to visualize the overlap-add method more clearly, the following illustrations examine the method graphically. Assume that the input <math>x(n_1, n_2)</math> has a square region of support of length <math>N</math> in both the vertical and horizontal directions, as shown in the figure below. It is then broken up into four smaller segments in such a way that it is now composed of four smaller squares. Each block of the aggregate signal has dimensions <math>(N/2)</math> <math>\times</math> <math>(N/2)</math>. [[File:X signal decomposed.png|thumb|Decomposed Input Signal|none]]Then, each component is convolved with the impulse response of the filter. Note an advantage of such an implementation: each of these convolutions can be parallelized on a computer, as long as the computer has sufficient memory and resources to store and compute simultaneously.
 
In the figure below, the first graph on the left represents the convolution corresponding to the component of the input <math>x_{0,0}</math> with the corresponding impulse response <math>h(n_1,n_2)</math>. To the right of that, the input <math>x_{1,0}</math> is then convolved with the impulse response <math>h(n_1,n_2)</math>.
 
[[File:Conjoined blocks.jpeg|thumb|387x387px|Individual Component Convolution with Impulse Response|none]][[File:Combined convo.png|left|thumb|255x255px|Convolution of each Component with the Overlap Portions Highlighted]]The same process is done for the other two inputs respectively, and they are accumulated together in order to form the convolution. This is depicted to the left.
 
Assume that the filter impulse response <math>h(n_1,n_2)</math> has a region of support of <math>(N/8)</math> in both dimensions. Each convolution then involves a block of size <math>(N/2) \times (N/2)</math> and a filter of size <math>(N/8) \times (N/8)</math>, which leads to overlap (highlighted in blue) since the length of each individual convolution is equivalent to:

<math>(N/2)+(N/8)-1 = (5/8)N-1</math>
 
in both directions. The lighter blue portion correlates to the overlap between two adjacent convolutions, whereas the darker blue portion correlates to overlap between all four convolutions. All of these overlap portions are added together in addition to the convolutions in order to form the combined convolution <math>y(n_1,n_2)</math>.<ref>{{Cite web|url = http://www.eeng.dcu.ie/~ee502/EE502s4.pdf|title = 2D Signal Processing|date = |accessdate = November 11, 2015|website = EE502: Digital Signal Processing|publisher = Dublin City University|last = |first = |page = 26}}</ref>
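
The method described above can be sketched compactly in NumPy (the block size and signal sizes are arbitrary, and `conv2d` is an illustrative direct-convolution reference): each block is FFT-convolved with the filter, and the overlapping tails are simply accumulated.

```python
import numpy as np

def conv2d(x, h):
    """Direct 2-D linear convolution, used only as a reference."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

def overlap_add_2d(x, h, L1, L2):
    """2-D overlap-add: FFT-convolve L1 x L2 blocks, accumulate the overlaps."""
    M1, M2 = h.shape
    y = np.zeros((x.shape[0] + M1 - 1, x.shape[1] + M2 - 1))
    for i in range(0, x.shape[0], L1):
        for j in range(0, x.shape[1], L2):
            blk = x[i:i + L1, j:j + L2]          # one input block
            B1 = blk.shape[0] + M1 - 1           # zero-padded FFT sizes
            B2 = blk.shape[1] + M2 - 1
            yij = np.real(np.fft.ifft2(
                np.fft.fft2(blk, (B1, B2)) * np.fft.fft2(h, (B1, B2))))
            y[i:i + B1, j:j + B2] += yij         # overlapping tails add together
    return y

rng = np.random.default_rng(4)
x = rng.standard_normal((16, 16))
h = rng.standard_normal((4, 4))
print(np.allclose(overlap_add_2d(x, h, 8, 8), conv2d(x, h)))   # True
```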
 
==Overlap and Save==
The overlap and save method, just like the overlap and add method, is also used to reduce the computational complexity associated with discrete-time convolutions. This method, coupled with the FFT, allows for massive amounts of data to be filtered through a digital system while minimizing the necessary memory space used for computations on massive arrays of data.
 
===Comparison to Overlap and Add===
The overlap and save method is very similar to the overlap and add method, with a few notable exceptions. The overlap-add method involves a linear convolution of discrete-time signals, whereas the overlap-save method involves the principle of circular convolution. In addition, the overlap and save method uses only a one-time zero padding of the impulse response, while the overlap-add method requires zero-padding for every convolution on each input component. Instead of using zero padding to prevent time-___domain aliasing like its overlap-add counterpart, overlap-save simply discards all points of aliasing and saves the previous data in one block to be copied into the convolution for the next block.
 
In one dimension, the performance and storage differences between the two methods are minimal. However, in the multidimensional convolution case, the overlap-save method is preferred over the overlap-add method in terms of speed and storage abilities.<ref>{{Cite journal|url = |title = High-Speed Multidimensional Convolution|last = Kim|first = Chang|date = May 1980|journal = IEEE Transactions on Pattern Analysis and Machine Intelligence|doi = |pmid = |access-date = |last2 = Strintzis|first2 = Michael}}</ref> As with overlap and add, the procedure is described for the two-dimensional case but extends easily to higher dimensions.
 
===Breakdown of Procedure===
Let <math>h(n_1, n_2)</math> be of size <math>M_1 \times M_2 </math>:
# Insert <math>(M_1 - 1)</math> columns and <math>(M_2 - 1)</math> rows of zeroes at the beginning of the input signal <math>x(n_1,n_2)</math> in both dimensions.
# Split the corresponding signal into overlapping segments of dimensions (<math>L_1 + M_1 - 1</math>)<math>\times</math>(<math>L_2 + M_2 - 1</math>) in which each two-dimensional block will overlap by <math>(M_1 - 1)</math> <math>\times</math> <math>(M_2 - 1)</math>.
# Zero pad <math>h(n_1, n_2)</math> such that it has dimensions (<math>L_1 + M_1 - 1</math>)<math>\times</math>(<math>L_2 + M_2 - 1</math>).
## Multiply to get <math>Y_{ij}(k_1, k_2) = X_{ij}(k_1, k_2)H(k_1,k_2)</math>.
## Take inverse discrete Fourier transform of <math>Y_{ij}(k_1, k_2)</math> to get <math>y_{ij}(n_1, n_2)</math>.
## Discard the first <math>(M_1 - 1)</math> rows and <math>(M_2 - 1)</math> columns of each output block <math>y_{ij}(n_1, n_2)</math>.
# Assemble <math>y(n_1, n_2)</math> by placing the remaining <math>(L_1\times L_2)</math> samples of each output block <math>y_{ij}(n_1, n_2)</math> in their corresponding positions.<ref name=":3" />
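
The numbered procedure above can be sketched in NumPy (a sketch, not a canonical implementation; the block sizes are arbitrary and `conv2d` is an illustrative direct-convolution reference):

```python
import numpy as np

def conv2d(x, h):
    """Direct 2-D linear convolution, used only as a reference."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

def overlap_save_2d(x, h, L1, L2):
    """2-D overlap-save: circular block convolutions, aliased edges discarded."""
    M1, M2 = h.shape
    B1, B2 = L1 + M1 - 1, L2 + M2 - 1
    H = np.fft.fft2(h, (B1, B2))           # one-time zero-padded filter DFT
    out1 = x.shape[0] + M1 - 1
    out2 = x.shape[1] + M2 - 1
    # Prepend M-1 zeros per dimension (plus end padding so blocks tile evenly)
    P1 = -(-out1 // L1) * L1
    P2 = -(-out2 // L2) * L2
    xp = np.zeros((P1 + M1 - 1, P2 + M2 - 1))
    xp[M1 - 1:M1 - 1 + x.shape[0], M2 - 1:M2 - 1 + x.shape[1]] = x
    y = np.zeros((P1, P2))
    for i in range(0, P1, L1):             # overlapping B1 x B2 segments
        for j in range(0, P2, L2):
            seg = xp[i:i + B1, j:j + B2]
            yij = np.real(np.fft.ifft2(np.fft.fft2(seg, (B1, B2)) * H))
            # Discard the aliased first M-1 rows/columns; keep the L1 x L2 core
            y[i:i + L1, j:j + L2] = yij[M1 - 1:M1 - 1 + L1, M2 - 1:M2 - 1 + L2]
    return y[:out1, :out2]

rng = np.random.default_rng(5)
x = rng.standard_normal((16, 16))
h = rng.standard_normal((3, 4))
print(np.allclose(overlap_save_2d(x, h, 5, 6), conv2d(x, h)))   # True
```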
 
==The Helix Transform==
Similar to row-column decomposition, the helix transform computes the multidimensional convolution by incorporating one-dimensional convolutional properties and operators. Instead of using the separability of signals, however, it maps the Cartesian coordinate space to a helical coordinate space allowing for a mapping from a multidimensional space to a one-dimensional space.
 
===Multidimensional Convolution with One-Dimensional Convolution Methods===
To understand the helix transform, it is useful to first understand how a multidimensional convolution can be broken down into a one-dimensional convolution. Assume that the two signals to be convolved are <math>X_{M \times N}</math> and <math>Y_{K \times L}</math>, which results in an output <math>Z_{(M+K-1) \times (N+L-1)}</math>. This is expressed as follows:
 
<math>Z(i,j) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X(m,n)Y(i-m, j-n)</math>
\end{bmatrix}</math>
 
where each of the input matrices is now of dimensions <math>(M+K-1) \times (N+L-1)</math>. It is then possible to use column-wise lexicographic ordering to convert the modified matrices into vectors, <math>X''</math> and <math>Y''</math>. To minimize the number of unimportant samples in each vector, each vector is truncated after the last sample of the original matrix <math>X</math> or <math>Y</math>, respectively. Given this, the lengths of the vectors <math>X''</math> and <math>Y''</math> are given by:
 
<math>l_{X''} = (M+K-1)\times(N-1) + M</math>
<math>l_{Z''} = l_{X''} + l_{Y''} - 1 = (M+K-1)\times(N+L-1)</math>
 
Interestingly, this vector length equals the number of entries in the original matrix output <math>Z</math>, so converting back to a matrix is a direct transformation. Thus, the vector <math>Z''</math> is converted back to matrix form, which produces the output of the two-dimensional discrete convolution.<ref name=":1">{{Cite journal|url = |title = Multidimensional convolution via a 1D convolution algorithm|last = Naghizadeh|first = Mostafa|date = November 2009|journal = The Leading Edge|doi = |pmid = |access-date = |last2 = Sacchi|first2 = Mauricio}}</ref>
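
A NumPy sketch of this vectorized two-dimensional convolution, following the zero-padding, column-wise ordering, and truncation lengths given above (the sizes and the `conv2d` reference helper are illustrative):

```python
import numpy as np

def conv2d(x, h):
    """Direct 2-D linear convolution, used only as a reference."""
    out = np.zeros((x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1))
    for k1 in range(h.shape[0]):
        for k2 in range(h.shape[1]):
            out[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x
    return out

def helix_conv2d(X, Y):
    """2-D convolution via a single 1-D convolution of column-stacked vectors."""
    M, N = X.shape
    K, L = Y.shape
    P, Q = M + K - 1, N + L - 1
    Xp = np.zeros((P, Q)); Xp[:M, :N] = X   # zero-pad to (M+K-1) x (N+L-1)
    Yp = np.zeros((P, Q)); Yp[:K, :L] = Y
    # Column-wise lexicographic ordering, truncated after the last original sample
    xv = Xp.flatten(order='F')[:P * (N - 1) + M]
    yv = Yp.flatten(order='F')[:P * (L - 1) + K]
    zv = np.convolve(xv, yv)                # one-dimensional convolution
    return zv.reshape((P, Q), order='F')    # length P*Q maps straight back

rng = np.random.default_rng(6)
X = rng.standard_normal((4, 5))
Y = rng.standard_normal((3, 3))
print(np.allclose(helix_conv2d(X, Y), conv2d(X, Y)))   # True
```

The row padding to <math>M+K-1</math> rows is what prevents the summed 1-D indices from "carrying" into the next column, so the 1-D result reshapes directly into <math>Z</math>.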
 
===Filtering on a Helix===
When working on a two-dimensional Cartesian mesh, a Fourier transform along either axis will cause the two-dimensional plane to become a cylinder, as the end of each column or row attaches to its respective top. Filtering on a helix behaves in a similar fashion, except that in this case the bottom of each column attaches to the top of the ''next'' column, resulting in a helical mesh. This is illustrated below. The darkened tiles represent the filter coefficients.
 
[[File:Cartesian combined.jpeg|none|thumb|480x480px|Transformation from a 2D Cartesian Filtering Plane to a Helix Filter.]]If this helical structure is then sliced and unwound into a one-dimensional strip, the same filter coefficients on the 2-D Cartesian plane will match up with the same input data, resulting in an equivalent filtering scheme. This means a two-dimensional convolution can be performed by a one-dimensional convolution operator, as the 2D filter has been unwound into a 1D filter with gaps of zeroes separating the filter coefficients.
[[File:1d strip.png|none|thumb|189x189px|One-Dimensional Filtering Strip after being Unwound.]]
Suppose a two-dimensional filter is used, such as the following discrete Laplacian:
{| class="wikitable"
|0
| -1
|0
|-
| -1
|4
| -1
|-
|0
| -1
|0
|}
<math> h(n) = -1, 0, \ldots , 0, -1, 4, -1, 0, \ldots , 0, -1</math>
 
Notice that the one-dimensional filter has no leading zeroes, as illustrated in the one-dimensional filtering strip after being unwound. The entire one-dimensional strip could have been used in the convolution; however, it is less computationally expensive to simply ignore the leading zeroes. In addition, none of these trailing zero values needs to be stored in memory, preserving memory resources.<ref name=":2">{{Cite journal|url = |title = Multidimensional recursive filters via a helix|last = Claerbout|first = Jon|date = September 1998|journal = Geophysics|doi = |pmid = |access-date = |page = 9}}</ref>
 
===Applications===
Helix transformations to implement recursive filters via convolution are used in various areas of signal processing. Although frequency-___domain Fourier analysis is effective when systems are stationary, with constant coefficients and periodically sampled data, it becomes more difficult in unstable systems. The helix transform enables three-dimensional post-stack migration processes that can process data for three-dimensional variations in velocity.<ref name=":2" /> In addition, it can be applied to assist with the problem of implicit three-dimensional wavefield extrapolation.<ref>{{Cite journal|url = |title = Exploring three-dimensional implicit wavefield extrapolation with the helix transform|last = Fomel|first = Sergey|date = 1997|journal = SEP report|doi = |pmid = |access-date = |last2 = Claerbout|first2 = Jon|pages = 43–60}}</ref> Other applications include helpful algorithms in seismic data regularization, prediction error filters, and noise attenuation in geophysical digital systems.<ref name=":1" />
 
==Gaussian Convolution==
===Approximation by FIR Filter===
 
Gaussian convolution can be effectively approximated by implementing a [[Finite impulse response]] (FIR) filter. The filter is designed with truncated versions of the Gaussian. For a two-dimensional filter, the transfer function of such a filter would be defined as the following:<ref name=":0">{{cite journal|last1=Getreuer|first1=Pascal|title=A Survey of Gaussian Convolution Algorithms|journal=Image Processing On Line|date=2013|pages=286–310|url=http://dx.doi.org/10.5201/ipol.2013.87|accessdate=12 November 2015}}</ref>
 
<math>H(z_1,z_2)=\frac{1}{s(r_1,r_2)} \sum_{n_1=-r_1}^{r_1}\sum_{n_2=-r_2}^{r_2}G(n_1,n_2){z_1}^{-n_1}{z_2}^{-n_2}</math>
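
A sketch of the truncated-Gaussian FIR kernel in NumPy, assuming the normalization <math>s(r_1,r_2)</math> is the sum of the truncated samples so that the filter has unit DC gain (this assumption, and the parameter values, are illustrative):

```python
import numpy as np

def gaussian_fir(sigma, r1, r2):
    """Truncated 2-D Gaussian FIR kernel, normalized by s(r1, r2)."""
    n1 = np.arange(-r1, r1 + 1)
    n2 = np.arange(-r2, r2 + 1)
    # Truncated Gaussian samples G(n1, n2) on [-r1, r1] x [-r2, r2]
    G = np.exp(-(n1[:, None]**2 + n2[None, :]**2) / (2.0 * sigma**2))
    s = G.sum()   # assumed normalization: unit gain at DC
    return G / s

k = gaussian_fir(sigma=1.5, r1=4, r2=4)
print(k.shape)                  # (9, 9)
print(np.isclose(k.sum(), 1.0))
```

The kernel is separable (it is an outer product of two one-dimensional Gaussians), so in practice it can also be applied with the row-column decomposition described earlier.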