In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider [math]\displaystyle{ u }[/math], the exact solution to a differential equation in an appropriate normed space [math]\displaystyle{ (V,||\ ||) }[/math], and a numerical approximation [math]\displaystyle{ u_h }[/math], where [math]\displaystyle{ h }[/math] is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution [math]\displaystyle{ u_h }[/math] is said to be [math]\displaystyle{ n }[/math]th-order accurate if the error [math]\displaystyle{ E(h):= ||u-u_h|| }[/math] is proportional to the step size [math]\displaystyle{ h }[/math] to the [math]\displaystyle{ n }[/math]th power:[1]
[math]\displaystyle{ E(h) = ||u-u_h|| \leq Ch^n, }[/math]
where the constant [math]\displaystyle{ C }[/math] is independent of [math]\displaystyle{ h }[/math] and usually depends on the solution [math]\displaystyle{ u }[/math].[2] Using big O notation, an [math]\displaystyle{ n }[/math]th-order accurate numerical method is written as
[math]\displaystyle{ E(h) = O(h^n). }[/math]
This definition depends strictly on the norm used in the space; the choice of norm is therefore fundamental to estimating the rate of convergence and, more generally, all numerical errors correctly.
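In practice, the observed order of accuracy can be estimated from errors computed at two step sizes [math]\displaystyle{ h_1 }[/math] and [math]\displaystyle{ h_2 }[/math]:
[math]\displaystyle{ n \approx \frac{\log\left(E(h_1)/E(h_2)\right)}{\log\left(h_1/h_2\right)}, }[/math]
since [math]\displaystyle{ E(h) \approx Ch^n }[/math] implies [math]\displaystyle{ E(h_1)/E(h_2) \approx (h_1/h_2)^n }[/math].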
The size of the error of a first-order accurate approximation is directly proportional to [math]\displaystyle{ h }[/math]. Numerical methods for partial differential equations, whose solutions vary over both time and space, are said to be accurate to order [math]\displaystyle{ n }[/math] in time and to order [math]\displaystyle{ m }[/math] in space.[3]
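The following is a minimal sketch of how an observed order of accuracy can be checked numerically for finite difference approximations of a derivative. The function names (forward, central, observed_order) and the test problem are illustrative assumptions, not part of any standard library; the order is taken as the slope of [math]\displaystyle{ \log E(h) }[/math] versus [math]\displaystyle{ \log h }[/math].
```python
import numpy as np

# Illustrative derivative approximations (names are hypothetical examples).
def forward(f, x, h):
    """Forward difference: first-order accurate, error ~ C*h."""
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    """Central difference: second-order accurate, error ~ C*h^2."""
    return (f(x + h) - f(x - h)) / (2 * h)

def observed_order(approx, f, df_exact, x0, hs):
    """Estimate the order n as the slope of log(error) versus log(h)."""
    errors = np.array([abs(approx(f, x0, h) - df_exact(x0)) for h in hs])
    return np.polyfit(np.log(hs), np.log(errors), 1)[0]

hs = np.array([1e-1, 5e-2, 2.5e-2, 1.25e-2])
print(observed_order(forward, np.sin, np.cos, 1.0, hs))  # close to 1
print(observed_order(central, np.sin, np.cos, 1.0, hs))  # close to 2
```
Consistent with the definition above, halving [math]\displaystyle{ h }[/math] should roughly halve the error of the first-order scheme and reduce the error of the second-order scheme by a factor of about four.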
Original source: https://en.wikipedia.org/wiki/Order_of_accuracy.
Categories: [Numerical analysis]