{{Orphan|date=November 2015}}
Multidimensional Digital Signal Processing (MDSP) refers to the extension of [[Digital signal processing]] (DSP) techniques to signals that vary in more than one dimension. While conventional DSP typically deals with one-dimensional data, such as time-varying [[Audio signal|audio signals]], MDSP involves processing signals in two or more dimensions. Many of the principles from one-dimensional DSP, such as [[Fourier transform|Fourier transforms]] and [[filter design]], have analogous counterparts in multidimensional signal processing.
Modern [[general-purpose computing on graphics processing units]] (GPGPUs) have excellent throughput on vector operations and numeric manipulations through a high degree of parallel computation. Because processing digital signals, particularly multidimensional signals, often involves a series of vector operations on massive numbers of independent data samples, GPGPUs are now widely employed to accelerate multidimensional DSP, such as [[image processing]], [[Video processing|video codecs]], [[Radar signal characteristics|radar signal analysis]], [[sonar signal processing]], and [[ultrasound scan]]ning. Conceptually, GPGPUs can dramatically reduce computation time when compared with [[central processing unit]]s (CPUs), [[digital signal processor]]s (DSPs), or other [[Field-programmable gate array|FPGA]] accelerators.
==Motivation==
Processing multidimensional signals is a common problem in scientific research and engineering computations. Typically, a DSP problem's computation complexity grows exponentially with the number of dimensions, and with such high time and storage complexity it is extremely difficult to process multidimensional signals in real time. Although many fast algorithms (e.g., the [[Fast Fourier transform|FFT]]) have been proposed for 1-D DSP problems, they are still not efficient enough to be adapted directly to high-dimensional DSP problems, so it remains hard to obtain the desired computation results with digital signal processors. Hence, better algorithms and hardware architectures are needed to accelerate multidimensional DSP computations.
==Existing approaches==
Several common approaches for accelerating multidimensional DSP have been proposed and developed over the past decades.
===Digital signal processors===
Digital signal processors are designed specifically to process vector operations, and they have been widely used in DSP computations for decades. However, most digital signal processors can only perform a small number of operations in parallel; while this is sufficient for many 1-D and small 2-D workloads, it is not powerful enough to process the massive sample counts of multidimensional signals in real time.
===Supercomputer assistance===
In order to accelerate multidimensional DSP computations, dedicated [[supercomputer]]s or [[computer cluster|cluster computers]] are employed in some applications. However, the monetary cost and energy consumption of such systems limit their use, especially for real-time embedded applications.
===GPU acceleration===
[[Graphics processing unit|GPUs]] were originally devised to accelerate image processing and video rendering. Because modern GPUs can also perform general numeric computations with a high degree of parallelism at relatively low cost and power consumption, they have become a popular means of accelerating multidimensional DSP.
==GPGPU computing paradigm==
[[File:SIMD GPGPU.jpg|alt= Figure illustrating a SIMD/vector computation unit in GPGPUs..|thumb|GPGPU/SIMD computation model]]
Modern GPU designs are mainly based on the [[Single instruction, multiple data|SIMD]] (Single Instruction Multiple Data) computation paradigm.<ref>{{cite journal|title=NVIDIA Tesla: A Unified Graphics and Computing Architecture|journal=IEEE Micro|date=2008-03-01|issn=0272-1732|pages=39–55|volume=28|issue=2|doi=10.1109/MM.2008.31|first1=E.|last1=Lindholm|first2=J.|last2=Nickolls|first3=S.|last3=Oberman|first4=J.|last4=Montrym|bibcode=2008IMicr..28b..39L |s2cid=2793450|language=en}}</ref><ref>{{cite book|title=Performance Analysis and Tuning for General Purpose Graphics Processing Units (GPGPU)|last1=Kim|first1=Hyesoon|author1-link=Hyesoon Kim|publisher=Morgan & Claypool Publishers|year=2012|isbn=978-1-60845-954-4|last2=Vuduc|first2=Richard|last3=Baghsorkhi|first3=Sara|last4=Choi|first4=Jee|last5=Hwu|first5=Wen-Mei W.|editor-last=Hill|editor-first=Mark D.|doi=10.2200/S00451ED1V01Y201209CAC020|language=en}}</ref> Such GPU devices are referred to as [[General-purpose computing on graphics processing units|general-purpose GPUs (GPGPUs)]].
GPGPUs are able to perform an operation on multiple independent data elements concurrently with their vector or SIMD functional units. A modern GPGPU can spawn thousands of concurrent threads and process all threads in a batch manner. Because many DSP problems can be solved by [[Divide and conquer algorithms|divide-and-conquer]] algorithms, GPGPUs can readily be employed as DSP accelerators: a large-scale and complex DSP problem can be divided into many small numeric problems that are processed in parallel, so that the overall time complexity is reduced significantly. For example, multiplying two {{math|''M'' × ''M''}} matrices can be processed by {{math|''M'' × ''M''}} concurrent threads on a GPGPU device without any output data dependency. Therefore, theoretically, by means of GPGPU acceleration, a speedup of up to {{math|''M'' × ''M''}} can be gained compared with a traditional CPU or digital signal processor.
==GPGPU programming==
Currently, there are several existing programming languages or interfaces which support GPGPU programming.
===CUDA===
[[CUDA]] is the standard interface to program [[Nvidia|NVIDIA]] GPUs. NVIDIA also provides many CUDA libraries to support DSP acceleration on NVIDIA GPU devices.<ref>{{cite web|title=Parallel Programming and Computing Platform {{!}} CUDA {{!}} NVIDIA {{!}} NVIDIA|url=http://www.nvidia.com/object/cuda_home_new.html|website=www.nvidia.com|access-date=2015-11-05|archive-url=https://web.archive.org/web/20140106051908/http://www.nvidia.com/object/cuda_home_new.html|archive-date=2014-01-06|url-status=dead|language=en}}</ref>
===OpenCL===
[[OpenCL]] is an industry standard which was originally proposed by [[Apple Inc.]] and is now maintained and developed by the [[Khronos Group]].<ref>{{cite web|title=OpenCL – The open standard for parallel programming of heterogeneous systems|url=https://www.khronos.org/opencl/|website=www.khronos.org|date=21 July 2013|access-date=2015-11-05|language=en}}</ref> OpenCL provides [[C++]]-like [[Application programming interface|APIs]] for programming different devices universally, including GPGPUs.
[[File:OpenCL execution flow rev.jpg|alt=Illustrating the execution flow of an OpenCL program/kernel|thumb|474x474px|OpenCL program execution flow]]
The figure illustrates the execution flow of launching an OpenCL program on a GPU device. The CPU first detects the available OpenCL devices (a GPU in this case) and then invokes a just-in-time compiler to translate the OpenCL source code into the target binary. The CPU then sends data to the GPU to perform computations. While the GPU is processing data, the CPU is free to process its own tasks.
===C++ AMP===
[[C++ AMP]] is a programming model proposed by [[Microsoft]]. C++ AMP is a [[C++]]-based library designed for programming SIMD processors.<ref>{{cite web|title=C++ AMP (C++ Accelerated Massive Parallelism)|url=https://msdn.microsoft.com/en-us/library/hh265137.aspx|website=msdn.microsoft.com|access-date=2015-11-05|language=en}}</ref>
===OpenACC===
[[OpenACC]] is a programming standard for [[parallel computing]] developed by [[Cray]], CAPS, [[Nvidia|NVIDIA]] and PGI.<ref>{{cite web|title=OpenACC Home {{!}} www.openacc.org|url=http://www.openacc.org/|website=www.openacc.org|access-date=2015-11-05|language=en}}</ref> OpenACC targets programming for heterogeneous CPU–GPU systems with [[C (programming language)|C]], [[C++]], and [[Fortran]] extensions.
==Examples of GPU programming for multidimensional DSP==
==={{math|''m'' × ''m''}} matrix multiplication===
Suppose {{math|'''A'''}} and {{math|'''B'''}} are two {{math|''m'' × ''m''}} matrices and we would like to compute {{math|1 = '''C''' = '''A''' × '''B'''}}.
<math>\mathbf{A}=\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1m} \\
\vdots & \vdots & \ddots & \vdots \\
A_{m1} & A_{m2} & \cdots & A_{mm} \\
\end{pmatrix},\quad
\mathbf{B}=\begin{pmatrix}
B_{11} & B_{12} & \cdots & B_{1m} \\
\vdots & \vdots & \ddots & \vdots \\
B_{m1} & B_{m2} & \cdots & B_{mm} \\
\end{pmatrix}</math>
<math>\mathbf{C}=\mathbf{A}\times\mathbf{B}=\begin{pmatrix}
C_{11} & C_{12} & \cdots & C_{1m} \\
\vdots & \vdots & \ddots & \vdots \\
C_{m1} & C_{m2} & \cdots & C_{mm} \\
\end{pmatrix},\quad C_{ij}=\sum_{k=1}^m A_{ik}B_{kj}</math>
To compute each element in {{math|'''C'''}} takes {{math|''m''}} multiplications and {{math|(''m'' – 1)}} additions. Therefore, with a CPU implementation, the time complexity of this computation is ''Θ(m''<sup>''3''</sup>'')'', as in the following C example. However, since the elements of {{math|'''C'''}} are independent of each other, the computation can be fully parallelized by SIMD processors, such as GPGPU devices. With a GPGPU implementation, the time complexity is reduced to ''Θ(m)'' by executing the two outer loops as concurrent threads, as shown in the following OpenCL example.<syntaxhighlight lang="c" line="1">
// MxM matrix multiplication in C
void matrixMul(
float *A, // input matrix A
float *B, // input matrix B
float *C, // output matrix C
int size) // size of the matrices
{
    // size x size x size iterations
for (int row = 0; row < size; row++) {
for (int col = 0; col < size; col++) {
int id = row * size + col;
float sum = 0.0;
for (int m = 0; m < size; m++) {
sum += (A[row * size + m] * B[m * size + col]);
}
C[id] = sum;
}
}
}
</syntaxhighlight><syntaxhighlight lang="c++" line="1">
// MxM matrix multiplication in OpenCL
__kernel void matrixMul(
__global float *A, // input matrix A
__global float *B, // input matrix B
__global float *C, // output matrix C
    int size)          // size of the matrices
{
size_t id = get_global_id(0); // each thread works on an element
size_t row = id / size;
size_t col = id % size;
float sum = 0.0;
    // size iterations
for (int m = 0; m < size; m++) {
sum += (A[row * size + m] * B[m * size + col]);
}
C[id] = sum;
}
</syntaxhighlight>
===Multidimensional convolution===
Convolution is a frequently used operation in DSP. To compute the 2-D convolution of two {{math|''m'' × ''m''}} signals, each output element requires {{math|''m''<sup>''2''</sup>}} multiplications and {{math|(''m''<sup>''2''</sup> – 1)}} additions, so the overall time complexity is ''Θ(m''<sup>''4''</sup>'')'' for the entire output signal. As the following OpenCL example shows, with GPGPU acceleration the total computation time effectively decreases to ''Θ(m''<sup>''2''</sup>'')'', since all output elements are data-independent.
2-D convolution equation:
<math>y(n_1, n_2)=x(n_1,n_2)**h(n_1,n_2)=\sum_{k_1=0}^{m-1}\sum_{k_2=0}^{m-1}x(k_1, k_2)h(n_1-k_1, n_2-k_2)</math><syntaxhighlight lang="c++" line="1">
// 2-D convolution implementation in OpenCL
__kernel void convolution(
    __global float *x, // input signal x
    __global float *h, // filter h
    __global float *y, // output signal y
    int size)          // size (per dimension) of the input signal and filter ROS
{
    size_t id = get_global_id(0);  // each thread works on one output element
    size_t cols = size + size - 1; // number of columns (and rows) of the output signal
    int n1 = id / cols;            // output row index
    int n2 = id % cols;            // output column index
    float sum = 0.0;
    // size x size iterations per output element
    for (int k1 = 0; k1 < size; k1++) {
        for (int k2 = 0; k2 < size; k2++) {
            int i1 = n1 - k1;
            int i2 = n2 - k2;
            // h(n1 - k1, n2 - k2) contributes only inside its region of support
            if (i1 >= 0 && i1 < size && i2 >= 0 && i2 < size) {
                sum += x[k1 * size + k2] * h[i1 * size + i2];
            }
        }
    }
    y[id] = sum;
}
</syntaxhighlight>
Note that, although the example demonstrated above is a 2-D convolution, a similar approach can be adopted for higher-dimensional systems. Overall, for an ''s''-D convolution, a GPGPU implementation has time complexity ''Θ(m''<sup>''s''</sup>'')'', whereas a CPU implementation has time complexity ''Θ(m''<sup>''2s''</sup>'')''.
M-D convolution equation:
<math>y(n_1,n_2,...,n_s)=x(n_1,n_2,...,n_s)**h(n_1,n_2,...,n_s)=\sum_{k_1=0}^{m_1-1}\sum_{k_2=0}^{m_2-1}...\sum_{k_s=0}^{m_s-1}x(k_1, k_2,...,k_s)h(n_1-k_1,n_2-k_2,...,n_s-k_s)</math>
===Multidimensional discrete-time Fourier transform (M-D DTFT)===
In addition to convolution, the [[Fourier transform|discrete-time Fourier transform (DTFT)]] is another technique which is often used in system analysis.
<math>X(\Omega_1,\Omega_2,...,\Omega_s)=\sum_{n_1=0}^{m_1-1}\sum_{n_2=0}^{m_2-1}...\sum_{n_s=0}^{m_s-1}x(n_1, n_2,...,n_s)e^{-j(\Omega_1n_1+\Omega_2n_2+...+\Omega_sn_s)}</math>
Practically, to implement an M-D DTFT, we can perform M rounds of 1-D DTFTs and matrix transposes, one with respect to each dimension. With a GPGPU, each 1-D DTFT operation can conceptually be reduced from ''Θ(n''<sup>''2''</sup>'')'' to ''Θ(n)'', as illustrated by the following OpenCL example. That is, an M-D DTFT can be computed on a GPU with a complexity of ''Θ(n''<sup>''2''</sup>'')''. Since some GPGPUs are also equipped with internal hardware FFT accelerators, this implementation may be further optimized by invoking the FFT APIs or libraries provided by GPU manufacturers.<ref>{{cite web|title=OpenCL™ Optimization Case Study Fast Fourier Transform – Part II – AMD|url=http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-fast-fourier-transform-part-ii/|website=AMD|access-date=2015-11-05|language=en-US}}</ref><syntaxhighlight lang="c++" line="1">
// DTFT in OpenCL
__kernel void dtft(
    __global float *x_re, // real part of input signal x
    __global float *x_im, // imaginary part of input signal x
    __global float *X_re, // real part of output X
    __global float *X_im, // imaginary part of output X
    int size)             // number of input samples
{
    size_t id = get_global_id(0); // each thread computes one frequency bin
    float PI = 3.14159265f;
    float sum_re = 0.0f;
    float sum_im = 0.0f;
    for (int n = 0; n < size; n++) {
        float w = 2.0f * PI * (float)id * n / size; // angle for bin id, sample n
        // x[n] * e^{-jw} = (x_re[n] + j*x_im[n]) * (cos(w) - j*sin(w))
        sum_re += x_re[n] * cos(w) + x_im[n] * sin(w);
        sum_im += x_im[n] * cos(w) - x_re[n] * sin(w);
    }
    X_re[id] = sum_re;
    X_im[id] = sum_im;
}
</syntaxhighlight>
==Real applications==
===Digital filter design===
Designing a multidimensional digital filter is a significant challenge, especially for [[Infinite impulse response|IIR]] filters. Typically, it relies on computers to solve difference equations and obtain a set of approximated solutions. As GPGPU computing has become popular, several adaptive algorithms have been proposed to design multidimensional [[Finite impulse response|FIR]] and/or [[Infinite impulse response|IIR]] filters by means of GPGPUs.<ref>{{cite book|publisher=ACM|date=2011-01-01|___location=New York, NY, USA|isbn=978-1-4503-0807-6|pages=176:1–176:12|series=SA '11|doi=10.1145/2024156.2024210|first1=Diego|last1=Nehab|first2=André|last2=Maximo|first3=Rodolfo S.|last3=Lima|first4=Hugues|last4=Hoppe|title=Proceedings of the 2011 SIGGRAPH Asia Conference|chapter=GPU-efficient recursive filtering and summed-area tables|s2cid=3014398|url=https://dl.acm.org/doi/abs/10.1145/2024156.2024210|url-access=limited|language=en}}</ref><ref>{{cite book|title=GPU Gems 2: Programming Techniques For High-Performance Graphics And General-Purpose Computation|last1=Pharr|first1=Matt|publisher=Pearson Addison Wesley|year=2005|isbn=978-0-321-33559-3|last2=Fernando|first2=Randima|language=en}}</ref><ref>{{cite book|title=GPU Computing Gems Emerald Edition|last=Hwu|first=Wen-mei W.|publisher=Morgan Kaufmann Publishers Inc.|year=2011|isbn=978-0-12-385963-1|___location=San Francisco, CA, USA|language=en}}</ref>
===Radar signal reconstruction and analysis===
Radar systems usually need to reconstruct numerous 3-D or 4-D data samples in real time. Traditionally, particularly in military applications, this required the support of supercomputers. Nowadays, GPGPUs are also employed in place of supercomputers to process radar signals. For example, processing [[Synthetic aperture radar|synthetic aperture radar (SAR)]] signals usually involves multidimensional [[Fast Fourier transform|FFT]] computations.<ref>{{cite book|date=2009-10-01|pages=309–314|doi=10.1109/SIPS.2009.5336272|first1=C.|last1=Clemente|first2=M.|last2=Di Bisceglie|first3=M.|last3=Di Santo|first4=N.|last4=Ranaldo|first5=M.|last5=Spinelli|title=2009 IEEE Workshop on Signal Processing Systems|chapter=Processing of synthetic Aperture Radar data with GPGPU|isbn=978-1-4244-4335-2|s2cid=18932083|language=en}}</ref><ref>{{cite book|date=2009-10-01|pages=1–5|doi=10.1109/CISP.2009.5304418|first1=Bin|last1=Liu|first2=Kaizhi|last2=Wang|first3=Xingzhao|last3=Liu|first4=Wenxian|last4=Yu|title=2009 2nd International Congress on Image and Signal Processing|chapter=An Efficient SAR Processor Based on GPU via CUDA|isbn=978-1-4244-4129-7|s2cid=18801932}}</ref><ref>{{cite book|date=2014-06-01|pages=455–458|doi=10.1109/MIXDES.2014.6872240|first1=P.|last1=Monsurro|first2=A.|last2=Trifiletti|first3=F.|last3=Lannutti|title=2014 Proceedings of the 21st International Conference Mixed Design of Integrated Circuits and Systems (MIXDES)|chapter=Implementing radar algorithms on CUDA hardware|isbn=978-83-63578-05-3|s2cid=16482715}}</ref> GPGPUs can be used to rapidly perform the FFT and/or inverse FFT in these kinds of applications.
===Self-driving cars===
Many [[self-driving car]]s apply 3-D image recognition techniques to automatically control the vehicle. To accommodate the fast-changing exterior environment, the recognition and decision processes must be completed in real time, and GPGPUs are excellent devices for achieving this goal.<ref>{{cite book|date=2012-12-01|pages=472–481|doi=10.1109/ICPADS.2012.71|first1=Jianbin|last1=Fang|first2=A.L.|last2=Varbanescu|first3=Jie|last3=Shen|first4=H.|last4=Sips|first5=G.|last5=Saygili|first6=L.|last6=van der Maaten|title=2012 IEEE 18th International Conference on Parallel and Distributed Systems|chapter=Accelerating Cost Aggregation for Real-Time Stereo Matching|isbn=978-1-4673-4565-1|s2cid=14737126}}</ref>
===Medical image processing===
In order to produce accurate diagnoses, 2-D or 3-D medical signals, such as [[ultrasound]], [[X-ray]], [[Magnetic resonance imaging|MRI]], and [[CT scan|CT]], often require very high sampling rates and image resolutions to reconstruct images. By applying GPGPUs' superior computation power, it was shown that better-quality medical images can be acquired.<ref>{{cite web|title=Medical Imaging{{!}}NVIDIA|url=http://www.nvidia.com/object/medical_imaging.html|website=www.nvidia.com|access-date=2015-11-07|language=en}}</ref><ref>{{cite book|volume=5|date=2005-01-01|pages=5145–5148|doi=10.1109/IEMBS.2005.1615635|pmid=17281405|first1=Yang|last1=Heng|first2=Lixu|last2=Gu|title=2005 IEEE Engineering in Medicine and Biology 27th Annual Conference|chapter=GPU-based Volume Rendering for Medical Image Visualization|isbn=978-0-7803-8741-6|s2cid=17401263}}</ref>
==References==
{{Reflist}}
{{DSP}}
{{Parallel computing}}