In physics and mathematics, the phase (symbol φ or ϕ) of a wave or other periodic function [math]\displaystyle{ F }[/math] of some real variable [math]\displaystyle{ t }[/math] (such as time) is an angle-like quantity representing the fraction of the cycle covered up to [math]\displaystyle{ t }[/math]. It is expressed in such a scale that it varies by one full turn as the variable [math]\displaystyle{ t }[/math] goes through each period (and [math]\displaystyle{ F(t) }[/math] goes through each complete cycle). It may be measured in any angular unit such as degrees or radians, thus increasing by 360° or [math]\displaystyle{ 2\pi }[/math] as the variable [math]\displaystyle{ t }[/math] completes a full period.[1]
This convention is especially appropriate for a sinusoidal function, since its value at any argument [math]\displaystyle{ t }[/math] can then be expressed as the sine of the phase [math]\displaystyle{ \varphi(t) }[/math], multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of the sine, depending on where one considers each period to start.)
Usually, whole turns are ignored when expressing the phase; so that [math]\displaystyle{ \varphi(t) }[/math] is also a periodic function, with the same period as [math]\displaystyle{ F }[/math], that repeatedly scans the same range of angles as [math]\displaystyle{ t }[/math] goes through each period. Then, [math]\displaystyle{ F }[/math] is said to be "at the same phase" at two argument values [math]\displaystyle{ t_1 }[/math] and [math]\displaystyle{ t_2 }[/math] (that is, [math]\displaystyle{ \varphi(t_1) = \varphi(t_2) }[/math]) if the difference between them is a whole number of periods.
The numeric value of the phase [math]\displaystyle{ \varphi(t) }[/math] depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to.
The term "phase" is also used when comparing a periodic function [math]\displaystyle{ F }[/math] with a shifted version [math]\displaystyle{ G }[/math] of it. If the shift in [math]\displaystyle{ t }[/math] is expressed as a fraction of the period, and then scaled to an angle [math]\displaystyle{ \varphi }[/math] spanning a whole turn, one gets the phase shift, phase offset, or phase difference of [math]\displaystyle{ G }[/math] relative to [math]\displaystyle{ F }[/math]. If [math]\displaystyle{ F }[/math] is a "canonical" function for a class of signals, like [math]\displaystyle{ \sin(t) }[/math] is for all sinusoidal signals, then [math]\displaystyle{ \varphi }[/math] is called the initial phase of [math]\displaystyle{ G }[/math].
Let [math]\displaystyle{ F }[/math] be a periodic signal (that is, a function of one real variable), and [math]\displaystyle{ T }[/math] be its period (that is, the smallest positive real number such that [math]\displaystyle{ F(t + T) = F(t) }[/math] for all [math]\displaystyle{ t }[/math]). Then the phase of [math]\displaystyle{ F }[/math] at any argument [math]\displaystyle{ t }[/math] is [math]\displaystyle{ \varphi(t) = 2\pi\left[\!\!\left[\frac{t - t_0}{T}\right]\!\!\right] }[/math]
Here [math]\displaystyle{ [\![\,\cdot\,]\!]\!\, }[/math] denotes the fractional part of a real number, discarding its integer part; that is, [math]\displaystyle{ [\![ x ]\!] = x - \left\lfloor x \right\rfloor\!\, }[/math]; and [math]\displaystyle{ t_0 }[/math] is an arbitrary "origin" value of the argument, that one considers to be the beginning of a cycle.
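As a minimal illustration (not part of the source article; the helper name and the use of Python are assumptions), the definition above can be computed directly:

```python
import math

def phase(t, t0, T):
    """Phase of a T-periodic signal at argument t, with cycle origin t0.

    Implements phi(t) = 2*pi*[[(t - t0)/T]], where [[x]] = x - floor(x),
    so the result lies in [0, 2*pi).
    """
    x = (t - t0) / T
    frac = x - math.floor(x)      # fractional part, discarding whole periods
    return 2 * math.pi * frac

# Example: period T = 0.5 s and origin t0 = 0; a quarter period later the phase is pi/2.
print(phase(0.125, 0.0, 0.5))     # ~1.5708
```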
This concept can be visualized by imagining a clock with a hand that turns at constant speed, making a full turn every [math]\displaystyle{ T }[/math] seconds, and is pointing straight up at time [math]\displaystyle{ t_0 }[/math]. The phase [math]\displaystyle{ \varphi(t) }[/math] is then the angle from the 12:00 position to the current position of the hand, at time [math]\displaystyle{ t }[/math], measured clockwise.
The phase concept is most useful when the origin [math]\displaystyle{ t_0 }[/math] is chosen based on features of [math]\displaystyle{ F }[/math]. For example, for a sinusoid, a convenient choice is any [math]\displaystyle{ t }[/math] where the function's value changes from zero to positive.
The formula above gives the phase as an angle in radians between 0 and [math]\displaystyle{ 2\pi }[/math]. To get the phase as an angle between [math]\displaystyle{ -\pi }[/math] and [math]\displaystyle{ +\pi }[/math], one uses instead [math]\displaystyle{ \varphi(t) = 2\pi\left(\left[\!\!\left[\frac{t - t_0}{T} + \frac{1}{2}\right]\!\!\right] - \frac{1}{2}\right) }[/math]
The phase expressed in degrees (from 0° to 360°, or from −180° to +180°) is defined the same way, except with "360°" in place of "2π".
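The alternative conventions differ only in how the fractional part is centred and scaled; a short sketch of both (again an illustration under assumed values, not from the source):

```python
import math

def phase_signed(t, t0, T):
    """Phase in [-pi, pi): 2*pi*([[(t - t0)/T + 1/2]] - 1/2)."""
    x = (t - t0) / T + 0.5
    return 2 * math.pi * ((x - math.floor(x)) - 0.5)

def phase_degrees(t, t0, T):
    """Phase in [0, 360): same formula as before, with 360 in place of 2*pi."""
    x = (t - t0) / T
    return 360.0 * (x - math.floor(x))

# Three quarters of a cycle past the origin:
print(phase_signed(0.375, 0.0, 0.5))    # -pi/2 (about -1.5708)
print(phase_degrees(0.375, 0.0, 0.5))   # 270.0
```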
With any of the above definitions, the phase [math]\displaystyle{ \varphi(t) }[/math] of a periodic signal is periodic too, with the same period [math]\displaystyle{ T }[/math]: [math]\displaystyle{ \varphi(t + T) = \varphi(t)\quad\quad \text{ for all } t. }[/math]
The phase is zero at the start of each period; that is [math]\displaystyle{ \varphi(t_0 + kT) = 0\quad\quad \text{ for any integer } k. }[/math]
Moreover, for any given choice of the origin [math]\displaystyle{ t_0 }[/math], the value of the signal [math]\displaystyle{ F }[/math] for any argument [math]\displaystyle{ t }[/math] depends only on its phase at [math]\displaystyle{ t }[/math]. Namely, one can write [math]\displaystyle{ F(t) = f(\varphi(t)) }[/math], where [math]\displaystyle{ f }[/math] is a function of an angle, defined only for a single full turn, that describes the variation of [math]\displaystyle{ F }[/math] as [math]\displaystyle{ t }[/math] ranges over a single period.
In fact, every periodic signal [math]\displaystyle{ F }[/math] with a specific waveform can be expressed as [math]\displaystyle{ F(t) = A\,w(\varphi(t)) }[/math] where [math]\displaystyle{ w }[/math] is a "canonical" function of a phase angle in 0 to 2π, that describes just one cycle of that waveform; and [math]\displaystyle{ A }[/math] is a scaling factor for the amplitude. (This claim assumes that the starting time [math]\displaystyle{ t_0 }[/math] chosen to compute the phase of [math]\displaystyle{ F }[/math] corresponds to argument 0 of [math]\displaystyle{ w }[/math].)
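As an illustration of the decomposition [math]\displaystyle{ F(t) = A\,w(\varphi(t)) }[/math] (a sketch only; the triangular waveform, amplitude, and period are arbitrary assumptions):

```python
import math

def w_triangle(phi):
    """Canonical triangular waveform over one cycle: 0, +1, 0, -1, 0 as phi runs over [0, 2*pi)."""
    u = phi / (2 * math.pi)                   # fraction of the cycle, in [0, 1)
    if u < 0.25:
        return 4 * u
    if u < 0.75:
        return 2 - 4 * u
    return 4 * u - 4

def F(t, A=2.0, T=0.5, t0=0.0):
    """Periodic triangular signal built from the canonical cycle: F(t) = A * w(phi(t))."""
    x = (t - t0) / T
    phi = 2 * math.pi * (x - math.floor(x))   # phase in [0, 2*pi)
    return A * w_triangle(phi)

print(F(0.125))   # a quarter period after t0: w = +1, so F = A = 2.0
```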
Since phases are angles, any whole full turns should usually be ignored when performing arithmetic operations on them. That is, the sum and difference of two phases (in degrees) should be computed by the formulas [math]\displaystyle{ 360\,\left[\!\!\left[\frac{\alpha + \beta}{360}\right]\!\!\right]\quad\quad \text{ and } \quad\quad 360\,\left[\!\!\left[\frac{\alpha - \beta}{360}\right]\!\!\right] }[/math] respectively. Thus, for example, the sum of phase angles 190° + 200° is 30° (190 + 200 = 390, minus one full turn), and subtracting 50° from 30° gives a phase of 340° (30 − 50 = −20, plus one full turn).
Similar formulas hold for radians, with [math]\displaystyle{ 2\pi }[/math] instead of 360.
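A minimal sketch of this modular arithmetic (the function names are assumptions):

```python
def add_phases_deg(alpha, beta):
    """Sum of two phases in degrees, reduced to [0, 360)."""
    return (alpha + beta) % 360

def sub_phases_deg(alpha, beta):
    """Difference of two phases in degrees, reduced to [0, 360)."""
    return (alpha - beta) % 360

print(add_phases_deg(190, 200))   # 30  (390 minus one full turn)
print(sub_phases_deg(30, 50))     # 340 (-20 plus one full turn)
```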
The difference [math]\displaystyle{ \varphi(t) = \varphi_G(t) - \varphi_F(t) }[/math] between the phases of two periodic signals [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] is called the phase difference or phase shift of [math]\displaystyle{ G }[/math] relative to [math]\displaystyle{ F }[/math].[1] At values of [math]\displaystyle{ t }[/math] when the difference is zero, the two signals are said to be in phase; otherwise, they are out of phase with each other.
In the clock analogy, each signal is represented by a hand (or pointer) of the same clock, both turning at constant but possibly different speeds. The phase difference is then the angle between the two hands, measured clockwise.
The phase difference is particularly important when two signals are added together by a physical process, such as two periodic sound waves emitted by two sources and recorded together by a microphone. This is usually the case in linear systems, when the superposition principle holds.
For arguments [math]\displaystyle{ t }[/math] when the phase difference is zero, the two signals will have the same sign and will be reinforcing each other. One says that constructive interference is occurring. At arguments [math]\displaystyle{ t }[/math] when the phases are different, the value of the sum depends on the waveform.
For sinusoidal signals, when the phase difference [math]\displaystyle{ \varphi(t) }[/math] is 180° ([math]\displaystyle{ \pi }[/math] radians), one says that the phases are opposite, and that the signals are in antiphase. Then the signals have opposite signs, and destructive interference occurs. Conversely, a phase reversal or phase inversion implies a 180-degree phase shift.[2]
When the phase difference [math]\displaystyle{ \varphi(t) }[/math] is a quarter of turn (a right angle, +90° = π/2 or −90° = 270° = −π/2 = 3π/2), sinusoidal signals are sometimes said to be in quadrature, e.g., in-phase and quadrature components of a composite signal or even different signals (e.g., voltage and current).
If the frequencies are different, the phase difference [math]\displaystyle{ \varphi(t) }[/math] increases linearly with the argument [math]\displaystyle{ t }[/math]. The periodic changes from reinforcement and opposition cause a phenomenon called beating.
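A short numerical sketch of beating (the frequencies 440 Hz and 442 Hz and the use of NumPy are assumptions made for the example):

```python
import numpy as np

f1, f2 = 440.0, 442.0                 # two close frequencies (Hz)
t = np.linspace(0.0, 2.0, 200_000)    # two seconds of time samples

# The phase difference grows linearly with t: 2*pi*(f2 - f1)*t.
delta_phi = 2 * np.pi * (f2 - f1) * t

# The sum of the two unit-amplitude sinusoids has a slowly varying envelope
# 2*|cos(delta_phi/2)|, which pulses at the beat frequency |f2 - f1| = 2 Hz.
s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
envelope = 2 * np.abs(np.cos(delta_phi / 2))

print(envelope[0], round(s.max(), 2))  # envelope starts at 2 (the signals begin in phase)
```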
The phase difference is especially important when comparing a periodic signal [math]\displaystyle{ F }[/math] with a shifted and possibly scaled version [math]\displaystyle{ G }[/math] of it. That is, suppose that [math]\displaystyle{ G(t) = \alpha\,F(t + \tau) }[/math] for some constants [math]\displaystyle{ \alpha,\tau }[/math] and all [math]\displaystyle{ t }[/math]. Suppose also that the origin for computing the phase of [math]\displaystyle{ G }[/math] has been shifted too. In that case, the phase difference [math]\displaystyle{ \varphi }[/math] is a constant (independent of [math]\displaystyle{ t }[/math]), called the 'phase shift' or 'phase offset' of [math]\displaystyle{ G }[/math] relative to [math]\displaystyle{ F }[/math]. In the clock analogy, this situation corresponds to the two hands turning at the same speed, so that the angle between them is constant.
In this case, the phase shift is simply the argument shift [math]\displaystyle{ \tau }[/math], expressed as a fraction of the common period [math]\displaystyle{ T }[/math] (in terms of the modulo operation) of the two signals and then scaled to a full turn: [math]\displaystyle{ \varphi = 2\pi \left[\!\!\left[ \frac{\tau}{T} \right]\!\!\right]. }[/math]
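A worked example (the numbers are assumed): a delay of τ = 3 ms in a signal of period T = 10 ms is 0.3 of a cycle, i.e. a phase shift of about 1.885 rad (108°):

```python
import math

tau = 0.003   # time shift (s), assumed
T   = 0.010   # common period (s), assumed

x = tau / T
phi = 2 * math.pi * (x - math.floor(x))             # phi = 2*pi*[[tau/T]]
print(round(phi, 3), round(math.degrees(phi), 1))   # 1.885 rad, 108.0 degrees
```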
If [math]\displaystyle{ F }[/math] is a "canonical" representative for a class of signals, like [math]\displaystyle{ \sin(t) }[/math] is for all sinusoidal signals, then the phase shift [math]\displaystyle{ \varphi }[/math] is called simply the initial phase of [math]\displaystyle{ G }[/math].
Therefore, when two periodic signals have the same frequency, they are always in phase, or always out of phase. Physically, this situation commonly occurs for many reasons. For example, the two signals may be a periodic soundwave recorded by two microphones at separate locations. Or, conversely, they may be periodic soundwaves created by two separate speakers from the same electrical signal, and recorded by a single microphone. They may be a radio signal that reaches the receiving antenna in a straight line, and a copy of it that was reflected off a large building nearby.
A well-known example of phase difference is the length of shadows seen at different points of Earth. To a first approximation, if [math]\displaystyle{ F(t) }[/math] is the length seen at time [math]\displaystyle{ t }[/math] at one spot, and [math]\displaystyle{ G }[/math] is the length seen at the same time at a longitude 30° west of that point, then the phase difference between the two signals will be 30° (assuming that, in each signal, each period starts when the shadow is shortest).
For sinusoidal signals (and a few other waveforms, like square or symmetric triangular), a phase shift of 180° is equivalent to a phase shift of 0° with negation of the amplitude. When two signals with these waveforms, the same period, and opposite phases are added together, the sum [math]\displaystyle{ F+G }[/math] is either identically zero, or is a signal with the same period and waveform whose amplitude is the difference of the original amplitudes.
The phase shift of the cosine function relative to the sine function is +90°. It follows that, for two sinusoidal signals [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] with the same frequency and amplitudes [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], where [math]\displaystyle{ G }[/math] has a phase shift of +90° relative to [math]\displaystyle{ F }[/math], the sum [math]\displaystyle{ F+G }[/math] is a sinusoidal signal with the same frequency, with amplitude [math]\displaystyle{ C }[/math] and phase shift [math]\displaystyle{ -90^\circ \lt \varphi \lt +90^\circ }[/math] from [math]\displaystyle{ F }[/math], such that [math]\displaystyle{ C = \sqrt{A^2 + B^2} \quad\quad \text{ and } \quad\quad \sin(\varphi) = B/C. }[/math]
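A quick numerical check of this identity (the amplitudes A = 3 and B = 4 are assumed for the example):

```python
import numpy as np

A, B, f = 3.0, 4.0, 1.0
t = np.linspace(0.0, 1.0, 10_000)

F = A * np.sin(2 * np.pi * f * t)          # reference sinusoid
G = B * np.cos(2 * np.pi * f * t)          # shifted by +90 degrees relative to F

C = np.hypot(A, B)                         # sqrt(A**2 + B**2) = 5
phi = np.arcsin(B / C)                     # phase shift of the sum, about 53.13 degrees
predicted = C * np.sin(2 * np.pi * f * t + phi)

print(np.allclose(F + G, predicted))       # True
print(C, round(np.degrees(phi), 2))        # 5.0 53.13
```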
A real-world example of a sonic phase difference occurs in the warble of a Native American flute. The amplitudes of the different harmonic components of the same long-held note on the flute come into dominance at different points in the phase cycle. The phase difference between the different harmonics can be observed on a spectrogram of the sound of a warbling flute.[4]
Phase comparison is a comparison of the phase of two waveforms, usually of the same nominal frequency. In time and frequency metrology, the purpose of a phase comparison is generally to determine the frequency offset (difference between signal cycles) with respect to a reference.[3]
A phase comparison can be made by connecting two signals to a two-channel oscilloscope, which displays them as two sine traces, with the top trace showing the test frequency and the bottom trace showing the signal from the reference.
If the two frequencies were exactly the same, their phase relationship would not change and both would appear to be stationary on the oscilloscope display. Since the two frequencies are not exactly the same, the reference appears to be stationary and the test signal moves. By measuring the rate of motion of the test signal the offset between frequencies can be determined.
If vertical lines are drawn through the points where each sine signal passes through zero, the gap between corresponding lines represents the phase difference between the signals. When this phase difference is increasing, the test signal is lower in frequency than the reference.[3]
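A minimal numerical sketch of the same idea (not the instrument procedure from the source; the frequencies, sample rate, and helper name are assumptions): track how the zero crossings of a test tone drift relative to those of a reference tone, and read the frequency offset from the drift rate.

```python
import numpy as np

fs = 100_000.0                           # sample rate (Hz)
f_ref, f_test = 1000.0, 1000.4           # reference and slightly offset test frequencies (Hz)
t = np.arange(0.0, 0.5, 1 / fs)

ref  = np.sin(2 * np.pi * f_ref  * t)
test = np.sin(2 * np.pi * f_test * t)

def rising_zero_crossings(x, t):
    """Times where the signal crosses zero going upward (linear interpolation)."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])
    return t[idx] + frac / fs

z_ref  = rising_zero_crossings(ref, t)
z_test = rising_zero_crossings(test, t)
n = min(len(z_ref), len(z_test))

# Phase difference at each reference cycle, expressed in cycles of the reference period.
dphi_cycles = (z_ref[:n] - z_test[:n]) * f_ref

# The drift rate of the phase difference (cycles per second) is the frequency offset.
offset = np.polyfit(z_ref[:n], dphi_cycles, 1)[0]
print(round(offset, 3))                  # ~0.4 Hz
```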
The phase of a simple harmonic oscillation or sinusoidal signal is the value of [math]\displaystyle{ \varphi }[/math] in the following functions: [math]\displaystyle{ \begin{align} x(t) &= A\cos( 2 \pi f t + \varphi ) \\ y(t) &= A\sin( 2 \pi f t + \varphi ) = A\cos\left( 2 \pi f t + \varphi - \tfrac{\pi}{2}\right) \end{align} }[/math] where [math]\displaystyle{ A }[/math], [math]\displaystyle{ f }[/math], and [math]\displaystyle{ \varphi }[/math] are constant parameters called the amplitude, frequency, and phase of the sinusoid. These signals are periodic with period [math]\displaystyle{ T = \frac{1}{f} }[/math], and they are identical except for a displacement of [math]\displaystyle{ \frac{T}{4} }[/math] along the [math]\displaystyle{ t }[/math] axis. The term phase can then refer either to the constant [math]\displaystyle{ \varphi }[/math] itself, taken relative to a reference such as [math]\displaystyle{ \cos(2\pi f t) }[/math], or to the time-variant angle [math]\displaystyle{ 2\pi f t + \varphi }[/math], whose principal value is called the instantaneous phase.
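A small numerical check (the parameter values are assumed) that the two parameterizations above differ only by a quarter-period displacement:

```python
import numpy as np

A, f, phi = 1.5, 2.0, 0.3               # amplitude, frequency (Hz), phase (rad)
T = 1.0 / f
t = np.linspace(0.0, 2.0, 20_000)

x = A * np.cos(2 * np.pi * f * t + phi)
y = A * np.sin(2 * np.pi * f * t + phi)

# y equals the cosine form with its phase reduced by pi/2, i.e. x delayed by T/4.
print(np.allclose(y, A * np.cos(2 * np.pi * f * t + phi - np.pi / 2)))   # True
print(np.allclose(y, A * np.cos(2 * np.pi * f * (t - T / 4) + phi)))     # True
```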
Original source: https://en.wikipedia.org/wiki/Phase_(waves)