{{distinguish|Array processor|Array data structure}}
{{more footnotes|date=November 2012}}
'''Array processing''' is a wide area of research in the field of [[signal processing]] that extends from the simplest form of one-dimensional line arrays to two- and three-dimensional array geometries. An array structure can be defined as a set of spatially separated sensors, e.g. [[antenna (radio)|radio antennas]] and [[Seismic array|seismic arrays]]. The sensors used for a specific problem may vary widely, for example [[microphone]]s, [[accelerometer]]s and [[telescope]]s. However, many similarities exist, the most fundamental of which may be an assumption of [[wave propagation]]. Wave propagation means there is a systematic relationship between the signals received on spatially separated sensors. By creating a physical model of the wave propagation, or in [[machine learning]] applications a [[training data]] set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
 
Some common problems that are solved with array processing techniques are:
 
== General model and problem formulation ==
Consider a system consisting of an array of '''r''' arbitrary sensors with arbitrary locations and arbitrary directional characteristics, which receive signals generated by '''q''' narrowband sources of known center frequency ω and locations θ<sub>1</sub>, θ<sub>2</sub>, θ<sub>3</sub>, ..., θ<sub>q</sub>. Since the signals are narrowband, the propagation delay across the array is much smaller than the reciprocal of the signal bandwidth, and it follows that by using a complex envelope representation the array output can be expressed (by superposition) as:<ref name="ref2"/><ref name="ref6"/><ref name="ref5"/><br>
<math>\textstyle x(t)=\sum_{k=1}^q a(\theta_k)s_k(t)+n(t)</math>
 
<math>\textstyle \mathbf x(t) = A(\theta)s(t) + n(t)</math>
 
If we now assume that M snapshots are taken at time instants t<sub>1</sub>, t<sub>2</sub>, ..., t<sub>M</sub>, the data can be expressed as:<br>
<math>\mathbf X = \mathbf A(\theta)\mathbf S + \mathbf N</math>
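As an illustration, the data model <math>\mathbf X = \mathbf A(\theta)\mathbf S + \mathbf N</math> can be simulated numerically. The sketch below is a hypothetical example, not from the source: it assumes a uniform linear array with half-wavelength element spacing, so the steering vector takes the form a(θ)<sub>m</sub> = e<sup>−jπm sin θ</sup>, and draws complex Gaussian source signals and noise.

```python
import numpy as np

rng = np.random.default_rng(0)
r, q, M = 8, 2, 200                    # sensors, sources, snapshots
thetas = np.deg2rad([20.0, -35.0])     # hypothetical source directions

def steering(theta, r):
    # Steering vector of a half-wavelength-spaced uniform linear array:
    # a(theta)_m = exp(-j * pi * m * sin(theta)), m = 0 ... r-1
    return np.exp(-1j * np.pi * np.arange(r) * np.sin(theta))

A = np.column_stack([steering(t, r) for t in thetas])   # r x q matrix A(theta)
# Complex circular Gaussian source signals and sensor noise
S = (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M))) / np.sqrt(2)

X = A @ S + N   # data matrix: each column is one snapshot x(t_i)
print(X.shape)  # (8, 200)
```

Each column of X is one snapshot x(t<sub>i</sub>); the q columns of A are the steering vectors of the sources, consistent with assumptions 1–2 below (q &lt; r and linearly independent steering vectors).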
 
'''“The target is to estimate the DOAs θ<sub>1</sub>, θ<sub>2</sub>, θ<sub>3</sub>, ..., θ<sub>q</sub> of the sources from the M snapshots of the array x(t<sub>1</sub>), ..., x(t<sub>M</sub>). In other words, what we are interested in is estimating the DOAs of emitter signals impinging on the receiving array, given a finite data set {x(t)} observed over t = 1, 2, ..., M. This will be done basically by using the second-order statistics of the data”'''<ref name="ref6"/><ref name="ref5"/>
 
In order to guarantee a valid solution to this problem, conditions or assumptions must be imposed on the operational environment and/or the model used. Since many parameters are used to specify the system, such as the number of sources and the number of array elements, there are conditions that should be met first. Toward this goal we make the following assumptions:<ref name="utexas1"/><ref name="ref2"/><ref name="ref6"/><br>
# The number of signals is known and is smaller than the number of sensors, q<r.<br>
# The set of any q steering vectors is linearly independent.<br>
# Isotropic and non-dispersive medium – Uniform propagation in all directions.<br>
# Zero mean white noise and signal, uncorrelated.<br>
# Far-Field.<br>
::a. Radius of propagation >> size of array.<br>
::b. Plane wave propagation.
 
where the noise eigenvector matrix is <math>\textstyle E_{n}=[e_{d+1}, \ldots, e_{M}]</math>
 
MUSIC spectrum approaches use a single realization of the stochastic process represented by the snapshots x(t), t = 1, 2, ..., M. MUSIC estimates are consistent and converge to the true source bearings as the number of snapshots grows to infinity. A basic drawback of the MUSIC approach is its sensitivity to model errors. MUSIC requires a costly calibration procedure and is very sensitive to errors in that procedure. The cost of calibration increases as the number of parameters that define the array manifold increases.
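To make the MUSIC procedure concrete, the following sketch forms the sample covariance from M snapshots, extracts the noise subspace E<sub>n</sub> from its eigendecomposition, and scans the pseudospectrum 1/‖E<sub>n</sub><sup>H</sup>a(θ)‖² for peaks. The array geometry (a half-wavelength uniform linear array), the source bearings, and the noise level are illustrative assumptions, not specified by the source.

```python
import numpy as np

rng = np.random.default_rng(1)
r, q, M = 8, 2, 500                      # sensors, sources, snapshots
true_doas = np.deg2rad([-40.0, 15.0])    # hypothetical source bearings

def steering(theta, r):
    # Half-wavelength uniform-linear-array steering vector a(theta)
    return np.exp(-1j * np.pi * np.arange(r) * np.sin(theta))

# Simulate snapshots X = A(theta) S + N
A = np.column_stack([steering(t, r) for t in true_doas])
S = (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M))) / np.sqrt(2)
X = A @ S + N

R = X @ X.conj().T / M                   # sample covariance matrix
_, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, : r - q]                       # noise subspace: r - q smallest eigenvectors

grid = np.deg2rad(np.linspace(-90, 90, 721))
a = np.column_stack([steering(t, r) for t in grid])
P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2   # MUSIC pseudospectrum

# Pick the q largest local maxima of the pseudospectrum as DOA estimates
peaks = [i for i in range(1, len(grid) - 1) if P[i - 1] < P[i] > P[i + 1]]
top = sorted(peaks, key=lambda i: P[i], reverse=True)[:q]
est = sorted(float(np.rad2deg(grid[i])) for i in top)
print(est)  # should be close to [-40, 15]
```

The pseudospectrum peaks where a(θ) is (nearly) orthogonal to the noise subspace, i.e. at the source bearings; the consistency mentioned above shows up here as the peaks sharpening around the true DOAs as M grows.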
 
=== Parametric–based solutions ===
|arxiv = 0809.2266 |bibcode=2008PASP..120.1207P}}</ref>
 
Correlation spectrometers like the [[Michelson interferometer]] vary the time lag between signals to obtain the power spectrum of the input signals. The power spectrum <math>S_{\text{XX}}(f)</math> of a signal is related to its autocorrelation function by a Fourier transform:<ref name="Harris">[http://www.sofia.usra.edu/det_workshop/papers/session4/4-04harris_edjw021022.pdf ''Spectrometers for Heterodyne Detection''] {{webarchive |url=https://web.archive.org/web/20160307051932/http://www.sofia.usra.edu/det_workshop/papers/session4/4-04harris_edjw021022.pdf |date=March 7, 2016 }} Andrew Harris</ref>
 
{{NumBlk|:|<math>S_{\text{XX}}(f) = \int_{-\infty}^{\infty} R_{\text{XX}}(\tau) \cos(2 \pi f \tau)\,\mathrm{d}\tau</math>|{{EquationRef|I}}}}
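The Fourier-transform relationship in equation ({{EquationNote|I}}) (the Wiener–Khinchin theorem) can be checked numerically in its discrete, circular form: the DFT of the circular autocorrelation of a sequence equals its periodogram. A minimal sketch with a random real-valued signal (illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = rng.standard_normal(n)             # real-valued input signal

# Direct power-spectrum (periodogram) estimate
S_direct = np.abs(np.fft.fft(x)) ** 2 / n

# Circular autocorrelation R_XX[k] = (1/n) * sum_t x[t] * x[(t+k) mod n]
R = np.array([np.dot(x, np.roll(x, -k)) for k in range(n)]) / n

# Wiener-Khinchin: the DFT of the autocorrelation equals the power spectrum
S_wk = np.fft.fft(R).real

print(np.allclose(S_wk, S_direct))  # True
```

A correlation spectrometer exploits exactly this equivalence: measuring the correlation as a function of lag, then transforming, yields the same spectrum as measuring power versus frequency directly.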