Acoustic holography is a technique for recording and reconstructing sound fields, the three-dimensional distributions of sound waves. To do this, the sound passing through a surface is recorded as a two-dimensional pattern called a hologram (a type of interferogram). The hologram contains information about the amplitude and phase of the sound waves passing through it, and from this pattern the entire three-dimensional sound field can be reconstructed. Acoustic holography is similar in principle to optical holography.[1]
There are two distinct forms of acoustic holography: farfield acoustical holography (FAH) and nearfield acoustical holography (NAH).[2][3] The distinction lies in the distance between the sound source and the measurement surface, which affects the resolution of the reconstructed sound field.[1] In NAH the hologram is recorded within a fraction of an acoustic wavelength of the source, so the rapidly decaying evanescent waves are still present in the measurement and details finer than the wavelength can be resolved.
The hologram is made by measuring the acoustic pressure at many points on a surface away from the source, using an array of transducers (microphones) or a single scanning transducer.
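As a concrete, hedged illustration of what such a measurement produces, the sketch below computes the complex pressure that a simple monopole (point) source would generate on a planar hologram grid at a single frequency; this simulated measurement stands in for a real array recording, and the grid size, spacing, frequency, and distances are assumed values, not figures from the cited sources.

```python
import numpy as np

# Assumed (illustrative) setup: a 32 x 32 planar grid of measurement points
# with 2 cm spacing, located 5 cm in front of a monopole source.
c = 343.0                      # speed of sound, m/s
freq = 1_000.0                 # analysis frequency, Hz
k = 2 * np.pi * freq / c       # acoustic wavenumber, rad/m
dx = dy = 0.02                 # grid spacing, m
nx = ny = 32
dz = 0.05                      # source plane to hologram plane distance, m

x = (np.arange(nx) - nx / 2) * dx
y = (np.arange(ny) - ny / 2) * dy
X, Y = np.meshgrid(x, y)

# Complex pressure of a unit monopole at the origin of the source plane,
# sampled on the hologram plane: amplitude and phase at every grid point
# (outgoing Green's function, e^{-i omega t} time convention assumed).
r = np.sqrt(X**2 + Y**2 + dz**2)
hologram = np.exp(1j * k * r) / (4 * np.pi * r)
```

In a real measurement the array of microphone signals would take the place of this synthetic field before the processing stage described next.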
The next stage is data processing with a computer. Fourier transforms convert the recorded signals from the time domain into the frequency domain, producing a set of intermediate holograms, one for each frequency bin of the transform. Each hologram is then decomposed into individual waves with known propagation characteristics. These waves are back-propagated to the source surface, and the entire sound field is recomposed by summing all of the waves.[1]
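A minimal sketch of this processing for one frequency bin, using the angular-spectrum (spatial Fourier) formulation of planar NAH, is given below. The function name, the e^{-iωt} sign convention, and the crude wavenumber-cutoff regularisation are illustrative assumptions rather than details taken from the cited sources.

```python
import numpy as np

def backpropagate_hologram(p_holo, dx, dy, freq, dz, c=343.0, cutoff=1.5):
    """Back-propagate a single-frequency pressure hologram (complex 2-D array)
    measured on a plane a distance dz in front of a planar source."""
    ny, nx = p_holo.shape
    k = 2 * np.pi * freq / c                    # acoustic wavenumber

    # Decompose the hologram into plane and evanescent waves with a
    # 2-D spatial Fourier transform (the "individual waves" of the text).
    P = np.fft.fft2(p_holo)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)     # real: propagating, imaginary: evanescent

    # Each wave component propagates from the source plane to the hologram
    # plane with the known factor exp(+1j*kz*dz) (e^{-i omega t} convention),
    # so back-propagation multiplies by its inverse, exp(-1j*kz*dz).
    inverse_propagator = np.exp(-1j * kz * dz)

    # Evanescent components are amplified exponentially on the way back,
    # so components far outside the radiation circle are discarded here
    # as a crude regularisation against measurement noise.
    kr = np.sqrt(KX**2 + KY**2)
    inverse_propagator[kr > cutoff * k] = 0.0

    # Recompose the sound field on the source plane by inverse-transforming
    # (summing) all of the back-propagated waves.
    return np.fft.ifft2(P * inverse_propagator)
```

Applied to the simulated hologram above, `backpropagate_hologram(hologram, dx, dy, freq, dz)` estimates the complex pressure on the source plane; repeating this for every frequency bin and transforming back to the time domain would yield the reconstructed sound field.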
Acoustic holography is increasingly used in fields such as transportation, vehicle and aircraft design, and noise, vibration, and harshness (NVH) analysis. The general idea of acoustic holography has led to advanced processing methods such as statistically optimal near-field acoustic holography (SONAH).[4]
For audio rendering and production, wave field synthesis and higher-order Ambisonics are related technologies, modeling the acoustic pressure field on a plane and in a spherical volume, respectively.