Sonar systems are generally used underwater for range finding and detection. Active sonar emits an acoustic signal, or pulse of sound, into the water. The sound bounces off the target object and returns an “echo” to the sonar transducer. Unlike active sonar, passive sonar does not emit its own signal, which is an advantage for military vessels. But passive sonar cannot measure the range of an object unless it is used in conjunction with other passive listening devices; multiple passive sonar devices must be used for triangulation of a sound source. Whether the sonar is active or passive, the information carried by the received signal cannot be used directly; a sequence of signal processing steps is needed to extract the useful information from the raw acoustic data.
For active sonar, six steps are needed in the signal processing chain.
To generate a signal pulse, typical analog implementations use oscillators and voltage-controlled oscillators (VCOs) followed by modulators. Amplitude modulation is used to weight the pulse envelope and to translate the signal spectrum up to some suitable carrier frequency for transmission.
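As a rough sketch, the same amplitude-weighted pulse can be generated digitally. The sampling rate, carrier frequency, pulse length and Hann envelope below are illustrative assumptions, not parameters taken from any particular sonar.

<syntaxhighlight lang="python">
import numpy as np

# Sketch of active-sonar pulse generation (all values assumed for illustration):
# a Hann window weights the pulse envelope (amplitude modulation) and the result
# is translated up to the carrier frequency for transmission.
fs = 500e3           # sampling rate, Hz (assumed)
f_carrier = 50e3     # carrier frequency, Hz (assumed)
T_pulse = 10e-3      # pulse length, s (assumed)

t = np.arange(0, T_pulse, 1 / fs)
envelope = np.hanning(t.size)                         # weighting of the pulse envelope
pulse = envelope * np.cos(2 * np.pi * f_carrier * t)  # translate up to the carrier
</syntaxhighlight>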
First, in a sonar system the acoustic pressure field can be represented as [math]\displaystyle{ s(t,\vec r) }[/math], a function of four variables: time [math]\displaystyle{ t }[/math] and the spatial coordinates [math]\displaystyle{ \vec r=(x,y,z) }[/math]. Its Fourier transform in the frequency–wavenumber domain is[1]
[math]\displaystyle{ \begin{align} s(w,\vec k)&=\iiiint s(t,\vec r)\, e^{-j (wt-\vec k\cdot\vec r)}\, d\vec r \, dt,\\ \vec k &= (k_x, k_y, k_z),\\ s(t,\vec r)&=\frac{1}{(2\pi)^4}\iiiint s(w,\vec k)\, e^{j (wt-\vec k\cdot\vec r)}\, d\vec k \, dw. \end{align} }[/math]
In this formula [math]\displaystyle{ w }[/math] is the temporal (angular) frequency and [math]\displaystyle{ \vec k }[/math] is the spatial frequency, or wavenumber vector. The signal [math]\displaystyle{ e^{j (wt- \vec k\cdot\vec r)} }[/math] is often called an elemental signal, because any 4-D field can be generated by taking a linear combination of elemental signals. The direction of [math]\displaystyle{ \vec k }[/math] gives the direction of propagation of the wave, and the speed of the wave is
[math]\displaystyle{ v = \frac{w}{|\vec k|} }[/math]
The wavelength is [math]\displaystyle{ \lambda= \frac{2\pi}{|\vec k|} }[/math]
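As a quick numerical check of these relations, the following sketch evaluates the speed and wavelength for an assumed sound speed of 1500 m/s and a 50 kHz temporal frequency, and forms one elemental signal.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative check of v = w/|k| and lambda = 2*pi/|k| (values are assumptions).
v = 1500.0                       # speed of sound in water, m/s (assumed)
f = 50e3                         # temporal frequency, Hz (assumed)
w = 2 * np.pi * f                # angular temporal frequency, rad/s
k = np.array([w / v, 0.0, 0.0])  # wavenumber vector pointing along +x

k_mag = np.linalg.norm(k)
print(w / k_mag)                 # speed of the wave: 1500.0 m/s
print(2 * np.pi / k_mag)         # wavelength: 0.03 m

# One elemental signal e^{j(wt - k.r)} evaluated at a single space-time point.
t, r = 1e-3, np.array([1.0, 0.0, 0.0])
elemental = np.exp(1j * (w * t - k.dot(r)))
</syntaxhighlight>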
Digital computers greatly increase the speed and efficiency of data analysis, so the analog signal must be converted into a digital signal by sampling it in the time domain. The conversion is typically realized by a dynamic range controller followed by an analog-to-digital conversion device.
For simplicity, the sampling is done at equal time intervals. To prevent distortion (that is, aliasing in the frequency domain) when the signal is reconstructed from its samples, one must sample at a sufficiently fast rate. The sampling rate that preserves the information content of an analog signal [math]\displaystyle{ s(t,\vec r) }[/math] is governed by the Nyquist–Shannon sampling theorem. Assuming the sampling period is T, the temporally sampled signal is
[math]\displaystyle{ r(nT)=s(\vec r, nT), }[/math] where n is an integer.
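A minimal sketch of this temporal sampling step is given below; the signal bandwidth and the sampling rate (chosen above the Nyquist rate) are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

# Sampling a band-limited sensor output at a rate above the Nyquist rate.
f_max = 1e3                  # highest frequency in the analog signal, Hz (assumed)
fs = 4 * f_max               # sampling rate above the Nyquist rate 2*f_max
T = 1 / fs                   # sampling period

n = np.arange(1024)                        # sample index
r_n = np.sin(2 * np.pi * f_max * n * T)    # r(nT) = s(r_vec, nT) at a fixed sensor
</syntaxhighlight>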
Appropriate sensor arrays and beamformers are important for good performance of a sonar system. To infer information about the acoustic field it is necessary to sample the field in space as well as in time. Temporal sampling has already been discussed above. The sensor array samples the spatial domain, while the beamformer combines the sensors' outputs in a particular way to enhance the detection and estimation performance of the system. The input to the beamformer is a set of time series, while its output is another set of time series or a set of Fourier coefficients.
For a uniform linear array with sensor spacing [math]\displaystyle{ D }[/math] along the x-axis, the signal received by the [math]\displaystyle{ i }[/math]th sensor is
[math]\displaystyle{ r_i(t)=s(\vec x_i,t), \qquad \vec x_i=(x_i,0,0)=(iD,0,0). }[/math]
For a desired steering direction [math]\displaystyle{ \vec k=\vec k_0 }[/math], the delays are set to [math]\displaystyle{ t_i=\frac{\vec k_0 \cdot \vec x_i}{w} }[/math].
Beamforming is one kind of filtering that can be applied to isolate signal components that are propagating in a particular direction. The simplest beamformer is the weighted delay-and-sum beamformer, which can be implemented with an array of receivers or sensors. Each sensor samples the field in the spatial domain; the sampled signals are then weighted, delayed and summed. Assuming an array of M sensors distributed in space, such that the [math]\displaystyle{ i }[/math]th sensor is located at position [math]\displaystyle{ \vec x_i }[/math] [math]\displaystyle{ (i=0,1,...,M-1) }[/math] and the signal received by it is denoted [math]\displaystyle{ r_i (t) }[/math], the beamformer output is
[math]\displaystyle{ b(t)=\frac{1}{M}\sum_{i=0}^{M-1}{w_i\, r_i(t-t_i)} }[/math]
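The sketch below implements this weighted delay-and-sum beamformer for the uniform linear array introduced above. The sensor spacing, steering angle, signal frequency and noise level are illustrative assumptions, and the steering delays are rounded to whole samples for simplicity.

<syntaxhighlight lang="python">
import numpy as np

# Weighted delay-and-sum beamformer for a uniform linear array x_i = (iD, 0, 0).
M, D, c, fs = 8, 0.015, 1500.0, 200e3   # sensors, spacing (m), sound speed (m/s), rate (Hz)
f_sig = 10e3                            # plane-wave frequency, Hz (assumed)
theta0 = np.deg2rad(30.0)               # desired look direction (assumed)

x_i = np.arange(M) * D                  # sensor positions along x
t_i = x_i * np.sin(theta0) / c          # steering delays t_i = k0 . x_i / w
w_i = np.ones(M)                        # uniform shading weights

# Simulated sensor outputs: the wavefront reaches sensor i a time t_i early,
# so r_i(t) = s(t + t_i), plus additive noise.
t = np.arange(2048) / fs
r = np.array([np.cos(2 * np.pi * f_sig * (t + ti)) for ti in t_i])
r += 0.1 * np.random.randn(*r.shape)

# b(t) = (1/M) * sum_i w_i * r_i(t - t_i), delays rounded to whole samples.
shifts = np.round(t_i * fs).astype(int)
b = sum(wi * np.roll(ri, si) for wi, ri, si in zip(w_i, r, shifts)) / M
</syntaxhighlight>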
Bandshifting is employed in active and passive sonar to reduce the complexity of the hardware and software required for subsequent processing. For example, in active sonars the received signal is contained in a very narrow band of frequencies, typically about 2 kHz wide, centered at some high frequency, typically about 50 kHz. To avoid having to sample the received process at the Nyquist rate of about 100 kHz, it is more efficient to demodulate the process to baseband and then sample the complex envelope at only 2 kHz.
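A sketch of this bandshifting (complex demodulation) step is shown below, using the 50 kHz carrier and 2 kHz bandwidth quoted above; the simulated input, the filter order and the sampling rates are assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import butter, lfilter

fs = 400e3                # rate of the simulated "analog" signal, Hz (assumed)
f0, bw = 50e3, 2e3        # carrier frequency and bandwidth from the example above

n = np.arange(int(0.1 * fs))
received = np.cos(2 * np.pi * (f0 + 500.0) * n / fs)   # a tone inside the band (assumed)

# Multiply by a complex exponential to shift the band of interest down to 0 Hz,
# then low-pass filter to keep only the complex envelope.
baseband = received * np.exp(-2j * np.pi * f0 * n / fs)
b, a = butter(4, bw / 2, fs=fs)
envelope = lfilter(b, a, baseband)

# The complex envelope only needs to be kept at 2 kHz instead of 100 kHz.
decim = int(fs // bw)
envelope_2k = envelope[::decim]
</syntaxhighlight>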
Filters and smoothers are used extensively in modern sonar systems. After sampling, the signal has been converted from an analog signal into a discrete-time signal, so only digital filters are considered here. Moreover, although some filters are time-varying or adaptive, most are linear and shift-invariant. Digital filters used in sonar signal processors perform two major functions: filtering waveforms to modify their frequency content, and smoothing waveforms to reduce the effects of noise. The two generic types of digital filters are finite impulse response (FIR) and infinite impulse response (IIR) filters. The input-output relationship of an FIR filter is
[math]\displaystyle{ y(n)=\sum_{k=0}^{N-1}{h(k) x(n-k) } }[/math] (1-D)
[math]\displaystyle{ y(n_1,n_2)=\sum_{k_1=0}^{M_1-1}\sum_{k_2=0}^{M_2 -1}{h(k_1,k_2)\, x(n_1 -k_1, n_2 -k_2) } }[/math] (2-D)
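For concreteness, the 1-D FIR relation can be evaluated directly as a convolution sum; the moving-average coefficients below are a hypothetical example, not a filter specified in the text.

<syntaxhighlight lang="python">
import numpy as np

def fir_filter(h, x):
    """Evaluate y(n) = sum_{k=0}^{N-1} h(k) x(n-k), taking x(n) = 0 for n < 0."""
    N = len(h)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(N):
            if n - k >= 0:
                y[n] += h[k] * x[n - k]
    return y

h = np.ones(5) / 5.0          # h(k): a 5-point moving average (assumed example)
x = np.random.randn(100)      # input waveform x(n)
y = fir_filter(h, x)          # equivalent to np.convolve(x, h)[:len(x)]
</syntaxhighlight>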
The input-output relationship of an IIR filter is
[math]\displaystyle{ y(n)=\sum_{k=0}^{N-1}{a_k x(n-k) }+\sum_{k=1}^{M-1}{b_k y(n-k) } }[/math] (1-D)
[math]\displaystyle{ y(n_1,n_2)=\sum_{r_1=0}^{N_1-1}\sum_{r_2=0}^{N_2 -1}{a(r_1,r_2)\, x(n_1 -r_1,n_2 -r_2) }-\sum_{l_1=0}^{M_1-1}\sum_{l_2=0}^{M_2-1}{b(l_1,l_2)\, y(n_1 -l_1,n_2 -l_2) } }[/math] (2-D), where the term with [math]\displaystyle{ (l_1,l_2)=(0,0) }[/math] is excluded from the second sum.
Both FIR filters and IIR filters have their advantages and disadvantages. First, the computational requirements of a sonar processor are more severe when implementing FIR filters, because an FIR filter generally needs more coefficients than an IIR filter to achieve a comparable frequency response. Second, linear phase is difficult to obtain with an IIR filter, whereas an FIR filter can have exactly linear phase and, unlike an IIR filter, is always stable. What's more, FIR filters are easily designed, for example using the windowing technique.
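As an example of the windowing technique just mentioned, the sketch below weights a truncated ideal low-pass impulse response with a Hamming window; because the resulting coefficients are symmetric, the filter has exactly linear phase. The cutoff frequency and filter length are assumptions chosen for illustration.

<syntaxhighlight lang="python">
import numpy as np

# FIR design by windowing: truncate the ideal low-pass impulse response and
# weight it with a Hamming window.  fc and N are assumed for illustration.
N = 51                                   # filter length (odd, symmetric taps)
fc = 0.1                                 # cutoff as a fraction of the sampling rate
n = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n)         # ideal low-pass impulse response
h *= np.hamming(N)                       # window to control the sidelobes
h /= h.sum()                             # unity gain at zero frequency
# h(k) = h(N-1-k), so the filter has exactly linear phase, and a finite set of
# taps means it is always stable.
</syntaxhighlight>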
In summary, the goal of the sonar system is to extract information from the acoustic space-time field and to process it in a designed, prescribed way, so that different cases can be handled by one fixed pattern. To realize this goal, the final stage of the sonar system consists of the following functions:
Original source: https://en.wikipedia.org/wiki/Sonar_signal_processing