Noise reduction is the process of removing noise from a signal. Noise reduction techniques exist for audio and images. Noise reduction algorithms may distort the signal to some degree. Noise rejection is the ability of a circuit to isolate an undesired signal component from the desired signal component, as with common-mode rejection ratio.
All signal processing devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random, with an even frequency distribution (white noise), or frequency-dependent, introduced by a device's mechanism or signal processing algorithms.
In electronic systems, a major type of noise is hiss created by random electron motion due to thermal agitation. These agitated electrons rapidly add to and subtract from the output signal, creating detectable noise.
In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains determines the film's sensitivity: more sensitive film has larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level.
Noise reduction algorithms tend to alter signals to a greater or lesser degree. The local signal-and-noise orthogonalization algorithm can be used to avoid changes to the signals.[1]
Boosting signals in seismic data is especially crucial for seismic imaging,[2][3] inversion,[4][5] and interpretation,[6] and thereby greatly improves the success rate of oil and gas exploration.[7][8][9][10] Useful signal that is smeared in ambient random noise is often neglected, which can cause spurious discontinuities in seismic events and artifacts in the final migrated image. Enhancing the useful signal while preserving the edge properties of seismic profiles by attenuating random noise can reduce interpretation difficulties and the risk of misleading results in oil and gas detection.
Tape hiss is a performance-limiting issue in analog tape recording. This is related to the particle size and texture used in the magnetic emulsion that is sprayed on the recording medium, and also to the relative tape velocity across the tape heads.
Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and codec or dual-ended systems. Single-ended pre-recording systems (such as Dolby HX Pro) work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNL[11] or DNR) work to reduce noise as it occurs, both before and after the recording process as well as in live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A, Burwen TNE 7000, and Packburn 101/323/323A/323AA and 325[12]) is applied to the playback of phonograph records to address scratches, pops, and surface non-linearities. Single-ended dynamic range expanders like the Phase Linear Autocorrelator Noise Reduction and Dynamic Range Recovery System (Models 1000 and 4000) can reduce various noise from old recordings. Dual-ended systems (such as the Dolby noise-reduction system or dbx) have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback.
Modern digital sound recordings no longer suffer from tape hiss, so analog-style noise reduction systems are unnecessary. In an interesting twist, however, dither systems deliberately add noise to a signal to improve its quality.
Dual-ended compander noise reduction systems have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback. Systems include the professional systems Dolby A[11] and Dolby SR by Dolby Laboratories, dbx Professional and dbx Type I by dbx, Donald Aldous' EMT NoiseBX,[13] the Burwen Noise Eliminator,[14][15][16] Telefunken's telcom c4[11] and MXR Innovations' MXR,[17] as well as the consumer systems Dolby NR, Dolby B,[11] Dolby C and Dolby S, dbx Type II,[11] Telefunken's High Com[11] and Nakamichi's High-Com II, Toshiba's adres (Automatic Dynamic Range Expansion System, as in the Aurex AD-4),[11][18] JVC's ANRS (Automatic Noise Reduction System)[11][18] and Super ANRS,[11][18] Fisher/Sanyo's Super D,[19][11][18] SNRS,[18] and the Hungarian/East German Ex-Ko system.[20][18]
In some compander systems, the compression is applied during professional media production and only the expansion is applied by the listener; for example, systems like dbx disc, High-Com II, CX 20[18] and UC used for vinyl recordings and Dolby FM, High Com FM and FMX used in FM radio broadcasting.
The first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal-to-noise ratio on tape up to 10 dB depending on the initial signal volume. When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB.
The Dolby B system (developed in conjunction with Henry Kloss) was a single-band system designed for consumer products. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder.
The Telefunken High Com integrated circuit U401BR could also be utilized as a mostly Dolby B-compatible compander.[21] In various late-generation High Com tape decks, the Dolby-B-emulating D NR Expander functionality worked not only during playback but, as an undocumented feature, also during recording.
dbx was a competing analog noise reduction system developed by David E. Blackmer, founder of Dbx, Inc.[22] It used a root-mean-square (RMS) encode/decode algorithm with the noise-prone high frequencies boosted, and the entire signal fed through a 2:1 compander. dbx operated across the entire audible bandwidth and, unlike Dolby B, was unusable without a decoder. However, it could achieve up to 30 dB of noise reduction.
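The companding idea common to dbx and similar dual-ended systems can be illustrated in a few lines. The following Python sketch applies 2:1 compression (in dB) on encode and the matching 1:2 expansion on decode; the sliding-RMS detector, window length, and ratio are illustrative stand-ins, not dbx's actual circuit design.

```python
import numpy as np

def rms_envelope(x, win=1024):
    """Crude sliding-RMS level detector (illustrative stand-in)."""
    pad = np.pad(x ** 2, (win // 2, win - win // 2 - 1), mode="edge")
    return np.sqrt(np.convolve(pad, np.ones(win) / win, mode="valid")) + 1e-12

def compand_encode(x, ratio=2.0):
    """Compress the dynamic range ratio:1 (in dB) before recording."""
    return x * rms_envelope(x) ** (1.0 / ratio - 1.0)

def compand_decode(y, ratio=2.0):
    """Expand 1:ratio on playback; hiss added between the two stages
    is pushed down along with the quiet passages."""
    return y * rms_envelope(y) ** (ratio - 1.0)
```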
Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct color systems), which keeps the tape at saturation level, audio-style noise reduction is unnecessary.
Dynamic noise limiter (DNL) is an audio noise reduction system originally introduced by Philips in 1971 for use on cassette decks.[11] Its circuitry is also based on a single chip.[23][24]
It was further developed into dynamic noise reduction (DNR) by National Semiconductor to reduce noise levels on long-distance telephony.[25] First sold in 1981, DNR is frequently confused with the far more common Dolby noise-reduction system.[26]
Unlike Dolby and dbx Type I and Type II noise reduction systems, DNL and DNR are playback-only signal processing systems that do not require the source material to first be encoded. They can be used to remove background noise from any audio signal, including magnetic tape recordings and FM radio broadcasts, reducing noise by as much as 10 dB.[27] They can also be used in conjunction with other noise reduction systems, provided that they are used prior to applying DNR to prevent DNR from causing the other noise reduction system to mistrack.[28]
One of DNR's first widespread applications was in the GM Delco car stereo systems in US GM cars introduced in 1984.[29] It was also used in factory car stereos in Jeep vehicles in the 1980s, such as the Cherokee XJ. Today, DNR, DNL, and similar systems are most commonly encountered as a noise reduction system in microphone systems.[30]
A second class of algorithms works in the time-frequency domain using some linear or non-linear filters that have local characteristics; these are often called time-frequency filters.[31][page needed] Noise can therefore also be removed with spectral editing tools, which work in this time-frequency domain and allow local modifications without affecting nearby signal energy. This can be done manually, much like drawing in a paint program. Another way is to define a dynamic threshold for filtering noise that is derived from the local signal, again with respect to a local time-frequency region. Everything below the threshold is filtered out, while everything above it, such as the partials of a voice or wanted noise, is left untouched. The region is typically defined by the location of the signal's instantaneous frequency,[32] as most of the signal energy to be preserved is concentrated about it.
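As a rough sketch of such threshold-based time-frequency filtering, the following Python fragment gates STFT bins against a per-frequency threshold estimated from a clip assumed to contain only noise. The function name, window length, and threshold multiplier are illustrative; a real tool would derive the threshold locally and smooth the mask to avoid artifacts.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(x, fs, noise_clip, thresh_mult=1.5):
    """Zero out time-frequency bins below a noise-derived threshold."""
    f, t, X = stft(x, fs=fs, nperseg=1024)
    _, _, N = stft(noise_clip, fs=fs, nperseg=1024)
    # Per-frequency threshold from the noise estimate's mean magnitude.
    thresh = thresh_mult * np.abs(N).mean(axis=1, keepdims=True)
    mask = np.abs(X) >= thresh  # keep bins above the threshold, drop the rest
    _, y = istft(X * mask, fs=fs, nperseg=1024)
    return y
```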
Most digital audio workstations (DAWs) and audio editing software have one or more noise reduction functions.
Images taken with digital cameras or conventional film cameras will pick up noise from a variety of sources. Further use of these images will often require that the noise be reduced either for aesthetic purposes, or for practical purposes such as computer vision.
In salt and pepper noise (sparse light and dark disturbances),[33] also known as impulse noise,[34] pixels in the image are very different in color or intensity from their surrounding pixels; the defining characteristic is that the value of a noisy pixel bears no relation to the color of surrounding pixels. When viewed, the image contains dark and white dots, hence the term salt and pepper noise. Generally, this type of noise will only affect a small number of image pixels. Typical sources include flecks of dust inside the camera and overheated or faulty CCD elements.
In Gaussian noise,[35] each pixel in the image will be changed from its original value by a (usually) small amount. A histogram, a plot of the amount of distortion of a pixel value against the frequency with which it occurs, shows a normal distribution of noise. While other distributions are possible, the Gaussian (normal) distribution is usually a good model, due to the central limit theorem that says that the sum of different noises tends to approach a Gaussian distribution.
In either case, the noise at different pixels can be either correlated or uncorrelated; in many cases, noise values at different pixels are modeled as being independent and identically distributed, and hence uncorrelated.
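For concreteness, both noise models are easy to simulate; a minimal numpy sketch (function names and amounts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.05):
    """Perturb every pixel by i.i.d. zero-mean Gaussian noise (image in [0, 1])."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02):
    """Replace a random fraction of pixels with pure black or white."""
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return out
```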
There are many noise reduction algorithms in image processing.[36] In selecting a noise reduction algorithm, one must weigh several factors: the available computer power and time (a digital camera must apply noise reduction in a fraction of a second using a tiny onboard CPU, while a desktop computer has much more power and time); whether sacrificing some real detail is acceptable if it allows more noise to be removed; and the characteristics of the noise and the detail in the image, to better make those decisions.
In real-world photographs, the highest spatial-frequency detail consists mostly of variations in brightness (luminance detail) rather than variations in hue (chroma detail). Most photographic noise reduction algorithms split the image detail into chroma and luminance components and apply more noise reduction to the former, or allow the user to control chroma and luminance noise reduction separately.
One method to remove noise is by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights.
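A minimal sketch of such convolution-based smoothing using SciPy's stock filters (the image and parameter values are placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

img = np.random.rand(128, 128)                    # placeholder image in [0, 1]
box_smoothed = uniform_filter(img, size=3)        # plain 3x3 neighborhood average
gauss_smoothed = gaussian_filter(img, sigma=1.0)  # Gaussian-weighted average
```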
Smoothing filters tend to blur an image because pixel intensity values that are significantly higher or lower than the surrounding neighborhood smear across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction;[citation needed] they are, however, often used as the basis for nonlinear noise reduction filters.
Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, which is called anisotropic diffusion. With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
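A minimal sketch of this idea in the Perona-Malik formulation, using one common exponential choice of edge-detecting diffusion coefficient (the constants are illustrative):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving diffusion: the conductance g falls toward zero where
    the local gradient is large, so edges diffuse far less than flat areas."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # Perona-Malik conductance
    for _ in range(n_iter):
        # Differences to the four nearest neighbors (periodic boundaries).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```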
Another approach for removing noise is based on non-local averaging of all the pixels in an image. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered on that pixel and the small patch centered on the pixel being de-noised.
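scikit-image ships an implementation of this non-local means scheme; a brief usage sketch, with parameter values that are merely illustrative:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

noisy = np.random.rand(128, 128)         # placeholder noisy image
sigma = np.mean(estimate_sigma(noisy))   # rough estimate of the noise level
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```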
A median filter is an example of a non-linear filter and, if properly designed, is very good at preserving image detail. To run a median filter: consider each pixel in the image; sort the neighbouring pixels into order based upon their intensities; and replace the original value of the pixel with the median value from that list.
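In practice a hand-rolled loop is rarely needed; a minimal sketch using SciPy's built-in median filter:

```python
import numpy as np
from scipy.ndimage import median_filter

noisy = np.random.rand(128, 128)        # placeholder image
cleaned = median_filter(noisy, size=3)  # median over each 3x3 neighborhood
```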
A median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters.[37] A much milder member of that family, for example one that selects the closest of the neighboring values when a pixel's value is external in its neighborhood and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications.
Median and other RCRS filters are good at removing salt and pepper noise from an image and cause relatively little blurring of edges; hence they are often used in computer vision applications.
The main aim of an image denoising algorithm is to achieve both noise reduction[38] and feature preservation[39] using wavelet filter banks.[40] In this context, wavelet-based methods are of particular interest. In the wavelet domain, the noise is uniformly spread throughout the coefficients, while most of the image information is concentrated in a few large coefficients.[41] Therefore, the first wavelet-based denoising methods were based on thresholding the coefficients of the detail subbands.[42][page needed] However, most wavelet thresholding methods suffer from the drawback that the chosen threshold may not match the specific distribution of signal and noise components at different scales and orientations.
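A minimal sketch of such subband thresholding with PyWavelets, using the common universal-threshold rule and a robust noise estimate from the finest diagonal subband; the wavelet and threshold rule are illustrative choices, and this fixed global threshold exhibits exactly the drawback described above:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db2", level=2):
    """Soft-threshold the detail subbands; the approximation band,
    which holds most of the image energy, is left untouched."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband,
    # then the universal threshold sigma * sqrt(2 log n).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```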
To address these disadvantages, non-linear estimators based on Bayesian theory have been developed. In the Bayesian framework, it has been recognized that a successful denoising algorithm can achieve both noise reduction and feature preservation if it employs an accurate statistical description of the signal and noise components.[41]
Statistical methods for image denoising exist as well, though they are infrequently used as they are computationally demanding. For Gaussian noise, one can model the pixels in a greyscale image as auto-normally distributed, where each pixel's true greyscale value is normally distributed with mean equal to the average greyscale value of its neighboring pixels and a given variance.
Let [math]\displaystyle{ \delta_i }[/math] denote the pixels adjacent to the [math]\displaystyle{ i }[/math]th pixel. Then the conditional distribution of the greyscale intensity (on a [math]\displaystyle{ [0,1] }[/math] scale) at the [math]\displaystyle{ i }[/math]th node is:
[math]\displaystyle{ \mathbb{P}(x(i) = c \mid x(j)\ \forall j \in \delta_i) \propto e^{-\frac{\beta}{2 \lambda} \sum_{j \in \delta_i} (c - x(j))^2} }[/math]
for a chosen parameter [math]\displaystyle{ \beta \ge 0 }[/math] and variance [math]\displaystyle{ \lambda }[/math]. One method of denoising under the auto-normal model uses the image data as a Bayesian prior and the auto-normal density as a likelihood function; the mean or mode of the resulting posterior distribution then serves as the denoised image.[43][44]
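A minimal sketch of a posterior-mode update under this model, assuming a quadratic data-fidelity term toward the observed image; the simultaneous update scheme and the single weight folding in [math]\displaystyle{ \beta }[/math] and the variances are illustrative simplifications:

```python
import numpy as np

def auto_normal_denoise(y, beta=1.0, n_iter=30):
    """Iteratively set each pixel to the mode of its conditional posterior:
    a weighted average of the observed value and the 4-neighbor mean.
    np.roll imposes periodic boundaries, which is fine for a sketch."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        neighbor_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                        np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x = (y + beta * neighbor_sum) / (1.0 + 4.0 * beta)
    return x
```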
A block-matching algorithm can be applied to group similar image fragments into overlapping macroblocks of identical size; stacks of similar macroblocks are then filtered together in the transform domain, and each image fragment is finally restored to its original location using a weighted average of the overlapping pixels.[45]
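Implementations of this block-matching and 3D filtering (BM3D) approach are available as third-party libraries; a hedged usage sketch, assuming the PyPI bm3d package and its bm3d(noisy, sigma_psd) entry point:

```python
import numpy as np
import bm3d  # third-party BM3D implementation (assumed installed from PyPI)

noisy = np.random.rand(128, 128)            # placeholder image in [0, 1]
denoised = bm3d.bm3d(noisy, sigma_psd=0.1)  # sigma_psd: assumed noise std
```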
Shrinkage fields is a random field-based machine learning technique that brings performance comparable to that of Block-matching and 3D filtering yet requires much lower computational overhead (such that it could be performed directly within embedded systems).[46]
Various deep learning approaches have been proposed to solve noise reduction[47] and such image restoration tasks. Deep Image Prior is one such technique that makes use of a convolutional neural network; it is distinct in that it requires no prior training data.[48]
Most general-purpose image and photo editing software will have one or more noise-reduction functions (median, blur, despeckle, etc.).
Original source: https://en.wikipedia.org/wiki/Noise_reduction.