The approximation error in a data value is the discrepancy between an exact value and some approximation to it. This error can be expressed as an absolute error (the numerical amount of the discrepancy) or as a relative error (the absolute error divided by the data value).
An approximation error can occur for a variety of reasons, among them a computing machine precision or measurement error (e.g. the length of a piece of paper is 4.53 cm but the ruler only allows you to estimate it to the nearest 0.1 cm, so you measure it as 4.5 cm).
In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors in its input lead to large errors in its output; a numerically stable algorithm does not produce a significant output error when its input is slightly perturbed, and vice versa.[1]
Given some value v, we say that vapprox approximates v with absolute error ε>0 if [2][3]
[math]\displaystyle{ |v-v_\text{approx}| \leq \varepsilon }[/math],
where the vertical bars denote the absolute value.
We say that vapprox approximates v with relative error η>0 if
[math]\displaystyle{ |v-v_\text{approx}| \leq \eta\cdot |v| }[/math].
If v ≠ 0, then
[math]\displaystyle{ \eta = \frac{\varepsilon}{|v|} = \left| \frac{v-v_\text{approx}}{v} \right| = \left| 1 - \frac{v_\text{approx}}{v} \right| }[/math].
The percent error (an expression of the relative error) is [3]
[math]\displaystyle{ \delta = 100\%\times\eta = 100\%\times\frac{|v-v_\text{approx}|}{|v|} }[/math].
An error bound is an upper limit on the relative or absolute size of an approximation error.[4]
As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002 = 0.2%. As a practical example, suppose a reading of 5 mL is taken from a beaker whose true contents are 6 mL; the absolute error is 1 mL and the percent error is 1/6 ≈ 16.7%, rounded.
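The arithmetic in these examples can be sketched in Python (the function names here are illustrative, not from the source):

```python
def absolute_error(v, v_approx):
    """Absolute error |v - v_approx|."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Relative error |v - v_approx| / |v|; undefined when v == 0."""
    if v == 0:
        raise ValueError("relative error is undefined for a true value of zero")
    return abs(v - v_approx) / abs(v)

# Examples from the text (results shown up to floating-point rounding):
print(absolute_error(50, 49.9))     # about 0.1
print(relative_error(50, 49.9))     # about 0.002, i.e. 0.2%
print(100 * relative_error(6, 5))   # about 16.7% for the beaker reading
```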
The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003 while in the second it is only 0.000003.
There are two features of relative error that should be kept in mind. First, relative error is undefined when the true value is zero, since the true value appears in the denominator (see below). Second, relative error only makes sense when measured on a ratio scale (i.e. a scale with a true, meaningful zero); otherwise it is sensitive to the measurement units. For example, an absolute error of 1 °C in a temperature measured on the Celsius scale with a true value of 2 °C gives a relative error of 0.5. But the exact same approximation expressed on the Kelvin scale, an absolute error of 1 K with the same true value of 275.15 K = 2 °C, gives a relative error of 3.63×10−3.
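The scale dependence can be checked numerically; this Python sketch (names illustrative) repeats the Celsius/Kelvin comparison:

```python
def relative_error(v, v_approx):
    """Relative error |v - v_approx| / |v|."""
    return abs(v - v_approx) / abs(v)

# Same physical measurement, two scales: the zero point changes the result.
true_c, measured_c = 2.0, 3.0           # 1 °C absolute error on the Celsius scale
true_k, measured_k = 275.15, 276.15     # the same error expressed in kelvins

print(relative_error(true_c, measured_c))   # 0.5
print(relative_error(true_k, measured_k))   # about 3.63e-3
```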
Statements about relative errors are sensitive to addition of constants, but not to multiplication by constants. For absolute errors, the opposite is true: they are sensitive to multiplication by constants, but not to addition of constants.[5](p34)
We say that a real value v is polynomially computable with absolute error from an input if, for every rational number ε>0, it is possible to compute a rational number vapprox that approximates v with absolute error ε, in time polynomial in the size of the input and the encoding size of ε (which is O(log(1/ε))). Analogously, v is polynomially computable with relative error if, for every rational number η>0, it is possible to compute a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and the encoding size of η.
If v is polynomially computable with relative error (by some algorithm called REL), then it is also polynomially computable with absolute error. Proof. Let ε>0 be the desired absolute error. First, use REL with relative error η=1/2 to find a rational number r1 such that |v-r1| ≤ |v|/2, and hence |v| ≤ 2 |r1|. If r1=0, then v=0 and we are done. Since REL is polynomial, the encoding length of r1 is polynomial in the input. Now, run REL again with relative error η=ε/(2 |r1|). This yields a rational number r2 that satisfies |v-r2| ≤ ε|v| / (2 |r1|) ≤ ε, so it has absolute error ε as desired.[5](p34)
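The two-call reduction in this proof can be sketched in Python with exact rationals. The `rel` oracle below is a hypothetical stand-in for REL, simulated here by rounding a known value to enough decimal places; only the reduction itself mirrors the proof:

```python
from fractions import Fraction

def abs_from_rel(rel, eps):
    """Approximate v with absolute error eps, given an oracle rel(eta)
    returning a rational r with |v - r| <= eta * |v|."""
    r1 = rel(Fraction(1, 2))          # |v - r1| <= |v|/2, so |v| <= 2|r1|
    if r1 == 0:
        return Fraction(0)            # then v = 0 exactly
    return rel(eps / (2 * abs(r1)))   # |v - r2| <= eps*|v|/(2|r1|) <= eps

# Demonstration with v = 1/3, using a simulated oracle (an assumption for
# illustration; a real REL would not know v in advance):
v = Fraction(1, 3)
def rel(eta):
    n = 1                             # round v to n decimals until within eta*|v|
    while Fraction(1, 10**n) > eta * abs(v):
        n += 1
    return Fraction(round(v * 10**n), 10**n)

approx = abs_from_rel(rel, Fraction(1, 1000))
assert abs(v - approx) <= Fraction(1, 1000)
```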
The reverse implication is usually not true. But, if we assume that some positive lower bound on |v| can be computed in polynomial time, e.g. |v| > b > 0, and v is polynomially computable with absolute error (by some algorithm called ABS), then it is also polynomially computable with relative error, since we can simply call ABS with absolute error ε = η b.
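A sketch of this direction of the reduction, where the `abs_oracle` and the lower bound `b` are assumptions supplied by the caller:

```python
from fractions import Fraction

def rel_from_abs(abs_oracle, b, eta):
    """Approximate v with relative error eta, given an oracle abs_oracle(eps)
    with |v - abs_oracle(eps)| <= eps, and a known lower bound |v| > b > 0."""
    # |v - r| <= eta*b < eta*|v|, so r has relative error at most eta.
    return abs_oracle(eta * b)

# Demonstration with v = 7/3 and a simulated oracle (an assumption for
# illustration) that rounds v to enough decimal places:
v = Fraction(7, 3)
def abs_oracle(eps):
    n = 1
    while Fraction(1, 10**n) > eps:
        n += 1
    return Fraction(round(v * 10**n), 10**n)

r = rel_from_abs(abs_oracle, Fraction(2), Fraction(1, 100))  # here |v| > 2
assert abs(v - r) <= Fraction(1, 100) * abs(v)
```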
An algorithm that, for every rational number η>0, computes a rational number vapprox that approximates v with relative error η, in time polynomial in the size of the input and 1/η (rather than log(1/η)), is called an FPTAS.
In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[6]
The definitions can be extended to the case when [math]\displaystyle{ v }[/math] and [math]\displaystyle{ v_{\text{approx}} }[/math] are n-dimensional vectors, by replacing the absolute value with an n-norm.[7]
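For vectors, a minimal sketch using the Euclidean 2-norm as the replacement for the absolute value (function names are illustrative):

```python
import math

def absolute_error_vec(v, v_approx):
    """Absolute error ||v - v_approx|| in the Euclidean (2-)norm."""
    return math.dist(v, v_approx)

def relative_error_vec(v, v_approx):
    """Relative error ||v - v_approx|| / ||v||; undefined when v is the zero vector."""
    return math.dist(v, v_approx) / math.hypot(*v)

v = [3.0, 4.0]            # ||v|| = 5
v_approx = [3.0, 4.1]
print(absolute_error_vec(v, v_approx))   # about 0.1
print(relative_error_vec(v, v_approx))   # about 0.02
```

Any other n-norm (e.g. the max-norm) could be substituted; the choice of norm is part of the definition.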
Original source: https://en.wikipedia.org/wiki/Approximation error.