In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value.[1] Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.
In formulas, a limit of a function is usually written as
[math]\displaystyle{ \lim_{x \to c} f(x) = L, }[/math]
and is read as "the limit of f of x as x approaches c equals L". This means that the value of the function f can be made arbitrarily close to L, by choosing x sufficiently close to c. Alternatively, the fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→ or [math]\displaystyle{ \rightarrow }[/math]), as in
[math]\displaystyle{ f(x) \to L \text{ as } x \to c, }[/math]
which reads "[math]\displaystyle{ f }[/math] of [math]\displaystyle{ x }[/math] tends to [math]\displaystyle{ L }[/math] as [math]\displaystyle{ x }[/math] tends to [math]\displaystyle{ c }[/math]".
The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.
The limit inferior and limit superior provide generalizations of the concept of a limit which are particularly relevant when the limit at a point may not exist.
Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."[2]
The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death.[3]
Augustin-Louis Cauchy in 1821,[4] followed by Karl Weierstrass, formalized the definition of the limit of a function which became known as the (ε, δ)-definition of limit.
The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.[5]
The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1.[6]
Formally, suppose a1, a2, … is a sequence of real numbers. The real number L is the limit of this sequence if and only if for every real number ε > 0, there exists a natural number N such that for all n > N, we have |an − L| < ε.[7] The common notation [math]\displaystyle{ \lim_{n \to \infty} a_n = L }[/math] is read as "the limit of an as n approaches infinity equals L".
The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L.
Not every sequence has a limit. A sequence with a limit is called convergent; otherwise it is called divergent. One can show that a convergent sequence has only one limit.
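The ε–N definition can be made concrete with a small numerical sketch in Python (purely illustrative): for the sequence aₙ = 1/n with limit L = 0, the threshold N = ⌈1/ε⌉ works, since n > N implies 1/n < ε.

```python
import math

def a(n):
    """The sequence a_n = 1/n, which converges to L = 0."""
    return 1 / n

def find_N(epsilon):
    """Return an N such that |a_n - 0| < epsilon for all n > N."""
    return math.ceil(1 / epsilon)

for eps in [0.1, 0.01, 0.001]:
    N = find_N(eps)
    # Spot-check the defining inequality for a range of n > N.
    assert all(abs(a(n) - 0) < eps for n in range(N + 1, N + 100))
    print(f"epsilon = {eps}: N = {N} suffices")
```

Any larger N works equally well; the definition only requires that some N exists for each ε.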
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n approaches infinity of a sequence {an} is simply the limit at infinity of a function a(n) defined on the natural numbers. On the other hand, if X is the domain of a function f(x) and if the limit as n approaches infinity of f(xn) is L for every sequence of points {xn} in X ∖ {x0} which converges to x0, then the limit of the function f(x) as x approaches x0 is equal to L.[8] One such sequence is {x0 + 1/n}.
There is also a notion of having a limit "tend to infinity", rather than to a finite value [math]\displaystyle{ L }[/math]. A sequence [math]\displaystyle{ \{a_n\} }[/math] is said to "tend to infinity" if, for each real number [math]\displaystyle{ M \gt 0 }[/math], known as the bound, there exists an integer [math]\displaystyle{ N }[/math] such that for each [math]\displaystyle{ n \gt N }[/math], [math]\displaystyle{ a_n \gt M. }[/math] That is, for every possible bound, the sequence eventually exceeds the bound. This is often written [math]\displaystyle{ \lim_{n\rightarrow \infty} a_n = \infty }[/math] or simply [math]\displaystyle{ a_n \rightarrow \infty }[/math].
It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is [math]\displaystyle{ a_n = (-1)^n }[/math].
There is a corresponding notion of tending to negative infinity, [math]\displaystyle{ \lim_{n\rightarrow \infty} a_n = -\infty }[/math], defined by changing the inequality in the above definition to [math]\displaystyle{ a_n \lt M, }[/math] with [math]\displaystyle{ M \lt 0. }[/math]
A sequence [math]\displaystyle{ \{a_n\} }[/math] with [math]\displaystyle{ \lim_{n\rightarrow \infty} |a_n| = \infty }[/math] is called unbounded, a definition equally valid for sequences in the complex numbers, or in any metric space. Conversely, a bounded sequence cannot tend to infinity; likewise, a sequence bounded above cannot tend to positive infinity, and a sequence bounded below cannot tend to negative infinity.
The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If [math]\displaystyle{ M }[/math] is a metric space with distance function [math]\displaystyle{ d }[/math], and [math]\displaystyle{ \{a_n\}_{n \geq 0} }[/math] is a sequence in [math]\displaystyle{ M }[/math], then the limit (when it exists) of the sequence is an element [math]\displaystyle{ a\in M }[/math] such that, given [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists an [math]\displaystyle{ N }[/math] such that for each [math]\displaystyle{ n \gt N }[/math], we have [math]\displaystyle{ d(a, a_n) \lt \epsilon. }[/math] An equivalent statement is that [math]\displaystyle{ a_n \rightarrow a }[/math] if the sequence of real numbers [math]\displaystyle{ d(a, a_n) \rightarrow 0 }[/math].
An important example is the space of [math]\displaystyle{ n }[/math]-dimensional real vectors, with elements [math]\displaystyle{ \mathbf{x} = (x_1, \cdots, x_n) }[/math] where each of the [math]\displaystyle{ x_i }[/math] are real, an example of a suitable distance function is the Euclidean distance, defined by [math]\displaystyle{ d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\| = \sqrt{\sum_i(x_i - y_i)^2}. }[/math] The sequence of points [math]\displaystyle{ \{\mathbf{x}_n\}_{n \geq 0} }[/math] converges to [math]\displaystyle{ \mathbf{x} }[/math] if the limit exists and [math]\displaystyle{ \|\mathbf{x}_n - \mathbf{x}\| \rightarrow 0 }[/math].
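As an illustration, a Python sketch with an assumed example sequence: the points xₙ = (1/n, 2 + 1/n) in ℝ² converge to (0, 2) because their Euclidean distance to the limit tends to 0.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points of R^n."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

limit = (0.0, 2.0)
for n in [10, 100, 1000, 10_000]:
    xn = (1 / n, 2 + 1 / n)  # d(x_n, limit) = sqrt(2)/n
    print(n, euclidean(xn, limit))  # distances shrink toward 0
```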
In some sense the most abstract spaces in which limits can be defined are topological spaces. If [math]\displaystyle{ X }[/math] is a topological space with topology [math]\displaystyle{ \tau }[/math], and [math]\displaystyle{ \{a_n\}_{n \geq 0} }[/math] is a sequence in [math]\displaystyle{ X }[/math], then the limit (when it exists) of the sequence is a point [math]\displaystyle{ a\in X }[/math] such that, given an open neighborhood [math]\displaystyle{ U\in \tau }[/math] of [math]\displaystyle{ a }[/math], there exists an [math]\displaystyle{ N }[/math] such that for every [math]\displaystyle{ n \gt N }[/math], [math]\displaystyle{ a_n \in U }[/math] is satisfied. In this case, the limit (if it exists) may not be unique. However, it must be unique if [math]\displaystyle{ X }[/math] is a Hausdorff space.
This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below.
The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set [math]\displaystyle{ E }[/math] to [math]\displaystyle{ \mathbb{R} }[/math]. Given a sequence of functions [math]\displaystyle{ \{f_n\}_{n \gt 0} }[/math] such that each is a function [math]\displaystyle{ f_n: E \rightarrow \mathbb{R} }[/math], suppose that there exists a function [math]\displaystyle{ f: E \rightarrow \mathbb{R} }[/math] such that for each [math]\displaystyle{ x \in E }[/math], [math]\displaystyle{ f_n(x) \rightarrow f(x) \text{ or equivalently } \lim_{n \rightarrow \infty}f_n(x) = f(x). }[/math]
Then the sequence [math]\displaystyle{ f_n }[/math] is said to converge pointwise to [math]\displaystyle{ f }[/math]. However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit.
Another notion of convergence is uniform convergence. The uniform distance between two functions [math]\displaystyle{ f,g: E \rightarrow \mathbb{R} }[/math] is the maximum difference between the two functions as the argument [math]\displaystyle{ x \in E }[/math] is varied. That is, [math]\displaystyle{ d(f,g) = \max_{x \in E}|f(x) - g(x)|. }[/math] Then the sequence [math]\displaystyle{ f_n }[/math] is said to uniformly converge or have a uniform limit of [math]\displaystyle{ f }[/math] if [math]\displaystyle{ f_n \rightarrow f }[/math] with respect to this distance. The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous.
Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev space.
Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression
[math]\displaystyle{ \lim_{x \to c} f(x) = L }[/math]
means that f(x) can be made to be as close to L as desired, by making x sufficiently close to c.[9] In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L".
Formally, the definition of the "limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] approaches [math]\displaystyle{ c }[/math]" is given as follows. The limit is a real number [math]\displaystyle{ L }[/math] so that, given an arbitrary real number [math]\displaystyle{ \epsilon \gt 0 }[/math] (thought of as the "error"), there is a [math]\displaystyle{ \delta \gt 0 }[/math] such that, for any [math]\displaystyle{ x }[/math] satisfying [math]\displaystyle{ 0 \lt |x - c| \lt \delta }[/math], it holds that [math]\displaystyle{ | f(x) - L | \lt \epsilon }[/math]. This is known as the (ε, δ)-definition of limit.
The inequality [math]\displaystyle{ 0 \lt |x - c| }[/math] is used to exclude [math]\displaystyle{ c }[/math] from the set of points under consideration, but some authors do not include this in their definition of limits, replacing [math]\displaystyle{ 0 \lt |x - c| \lt \delta }[/math] with simply [math]\displaystyle{ |x - c| \lt \delta }[/math]. This replacement is equivalent to additionally requiring that [math]\displaystyle{ f }[/math] be continuous at [math]\displaystyle{ c }[/math].
It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions.[10] The equivalent definition is given as follows. First observe that for every sequence [math]\displaystyle{ \{x_n\} }[/math] in the domain of [math]\displaystyle{ f }[/math], there is an associated sequence [math]\displaystyle{ \{f(x_n)\} }[/math], the image of the sequence under [math]\displaystyle{ f }[/math]. The limit is a real number [math]\displaystyle{ L }[/math] so that, for all sequences [math]\displaystyle{ x_n \rightarrow c }[/math], the associated sequence [math]\displaystyle{ f(x_n) \rightarrow L }[/math].
It is possible to define the notion of a "left-handed" limit ("from below") and a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, [math]\displaystyle{ f: \mathbb{R} \rightarrow \mathbb{R} }[/math], defined such that [math]\displaystyle{ f(x) = 0 }[/math] if [math]\displaystyle{ x \leq 0 }[/math], and [math]\displaystyle{ f(x) = 1 }[/math] if [math]\displaystyle{ x \gt 0 }[/math]. At [math]\displaystyle{ x = 0 }[/math], the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, for this example, [math]\displaystyle{ \lim_{x \to 0^-}f(x) = 0 }[/math] and [math]\displaystyle{ \lim_{x \to 0^+}f(x) = 1 }[/math], from which it can be deduced that [math]\displaystyle{ \lim_{x \to 0}f(x) }[/math] does not exist, because [math]\displaystyle{ \lim_{x \to 0^-}f(x) \neq \lim_{x \to 0^+}f(x) }[/math].
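A quick numerical sketch (Python, illustrative) of the one-sided limits of this step function, sampling points approaching 0 from each side:

```python
# Step function: f(x) = 0 for x <= 0, f(x) = 1 for x > 0.
def f(x):
    return 0 if x <= 0 else 1

left  = [f(-1 / 10 ** k) for k in range(1, 6)]   # x -> 0 from below
right = [f(+1 / 10 ** k) for k in range(1, 6)]   # x -> 0 from above
print(left)   # all 0: the left-hand limit is 0
print(right)  # all 1: the right-hand limit is 1
# Since 0 != 1, the two-sided limit at 0 does not exist.
```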
It is possible to define the notion of "tending to infinity" in the domain of [math]\displaystyle{ f }[/math], [math]\displaystyle{ \lim_{x \rightarrow \infty} f(x) = L. }[/math]
In this expression, the infinity is considered to be signed: either [math]\displaystyle{ + \infty }[/math] or [math]\displaystyle{ - \infty }[/math]. The "limit of f as x tends to positive infinity" is defined as follows. It is a real number [math]\displaystyle{ L }[/math] such that, given any real [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists an [math]\displaystyle{ M \gt 0 }[/math] so that if [math]\displaystyle{ x \gt M }[/math], [math]\displaystyle{ |f(x) - L| \lt \epsilon }[/math]. Equivalently, for any sequence [math]\displaystyle{ x_n \rightarrow + \infty }[/math], we have [math]\displaystyle{ f(x_n) \rightarrow L }[/math].
It is also possible to define the notion of "tending to infinity" in the value of [math]\displaystyle{ f }[/math], [math]\displaystyle{ \lim_{x \rightarrow c} f(x) = \infty. }[/math]
The definition is given as follows. Given any real number [math]\displaystyle{ M\gt 0 }[/math], there is a [math]\displaystyle{ \delta \gt 0 }[/math] so that for [math]\displaystyle{ 0 \lt |x - c| \lt \delta }[/math], we have [math]\displaystyle{ f(x) \gt M }[/math]. Equivalently, for any sequence [math]\displaystyle{ x_n \rightarrow c }[/math], the sequence [math]\displaystyle{ f(x_n) \rightarrow \infty }[/math].
In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence [math]\displaystyle{ (a_n) }[/math] can be expressed as the standard part of the value [math]\displaystyle{ a_H }[/math] of the natural extension of the sequence at an infinite hypernatural index n=H. Thus,
[math]\displaystyle{ \lim_{n \to \infty} a_n = \operatorname{st}(a_H). }[/math]
Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal [math]\displaystyle{ a=[a_n] }[/math] represented in the ultrapower construction by a Cauchy sequence [math]\displaystyle{ (a_n) }[/math], is simply the limit of that sequence:
[math]\displaystyle{ \operatorname{st}(a) = \lim_{n \to \infty} a_n. }[/math]
In this sense, taking the limit and taking the standard part are equivalent procedures.
Let [math]\displaystyle{ \{a_n\}_{n \gt 0} }[/math] be a sequence in a topological space [math]\displaystyle{ X }[/math]. For concreteness, [math]\displaystyle{ X }[/math] can be thought of as [math]\displaystyle{ \mathbb{R} }[/math], but the definitions hold more generally. The limit set of the sequence is the set of points [math]\displaystyle{ a }[/math] for which there exists a convergent subsequence [math]\displaystyle{ \{a_{n_k}\}_{k \gt 0} }[/math] with [math]\displaystyle{ a_{n_k}\rightarrow a }[/math]. In this context, such an [math]\displaystyle{ a }[/math] is sometimes called a limit point.
A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence [math]\displaystyle{ a_n = (-1)^n }[/math]. Starting from n=1, the first few terms of this sequence are [math]\displaystyle{ -1, +1, -1, +1, \cdots }[/math]. It can be checked that it is oscillatory, so has no limit, but has limit points [math]\displaystyle{ \{-1, +1\} }[/math].
This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function [math]\displaystyle{ \gamma: \mathbb{R} \rightarrow X }[/math], the point [math]\displaystyle{ \gamma(t) }[/math] is thought of as the "position" of the trajectory at "time" [math]\displaystyle{ t }[/math]. The limit set of a trajectory is defined as follows: to any sequence of increasing times [math]\displaystyle{ \{t_n\} }[/math], there is an associated sequence of positions [math]\displaystyle{ \{x_n\} = \{\gamma(t_n)\} }[/math]. If the sequence [math]\displaystyle{ \{x_n\} }[/math] converges to [math]\displaystyle{ x }[/math] for some choice of increasing times, then [math]\displaystyle{ x }[/math] belongs to the limit set of the trajectory.
Technically, this is the [math]\displaystyle{ \omega }[/math]-limit set. The corresponding limit set for sequences of decreasing time is called the [math]\displaystyle{ \alpha }[/math]-limit set.
An illustrative example is the circle trajectory [math]\displaystyle{ \gamma(t) = (\cos(t), \sin(t)) }[/math]. This has no limit, but for each [math]\displaystyle{ \theta \in \mathbb{R} }[/math], the point [math]\displaystyle{ (\cos(\theta), \sin(\theta)) }[/math] is a limit point, given by the sequence of times [math]\displaystyle{ t_n = \theta + 2\pi n }[/math]. But the limit points need not be attained on the trajectory. The trajectory [math]\displaystyle{ \gamma(t) = \frac{t}{1 + t}(\cos(t), \sin(t)) }[/math] also has the unit circle as its limit set.
Limits are used to define a number of important concepts in analysis.
A particular expression of interest which is formalized as the limit of a sequence is sums of infinite series. These are "infinite sums" of real numbers, generally written as [math]\displaystyle{ \sum_{n = 1}^\infty a_n. }[/math] This is defined through limits as follows:[10] given a sequence of real numbers [math]\displaystyle{ \{a_n\} }[/math], the sequence of partial sums is defined by [math]\displaystyle{ s_n = \sum_{i = 1}^n a_i. }[/math] If the limit of the sequence [math]\displaystyle{ \{s_n\} }[/math] exists, the value of the expression [math]\displaystyle{ \sum_{n = 1}^\infty a_n }[/math] is defined to be the limit. Otherwise, the series is said to be divergent.
A classic example is the Basel problem, where [math]\displaystyle{ a_n = 1/n^2 }[/math]. Then [math]\displaystyle{ \sum_{n = 1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}. }[/math]
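The convergence of the partial sums can be observed numerically; a Python sketch (the choice of cutoffs is arbitrary):

```python
import math

def partial_sum(N):
    """Partial sum s_N = sum_{n=1}^{N} 1/n^2 of the Basel series."""
    return sum(1 / n ** 2 for n in range(1, N + 1))

target = math.pi ** 2 / 6  # = 1.6449...
for N in (10, 100, 1000):
    print(N, partial_sum(N), target - partial_sum(N))
# The remaining error behaves like 1/N, so convergence is slow.
```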
However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression [math]\displaystyle{ \sum_{n = 1}^\infty a_n }[/math] does not discriminate between different orderings of the sequence [math]\displaystyle{ \{a_n\} }[/math], while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence.
A series which converges for all orderings is called unconditionally convergent. This can be proven to be equivalent to absolute convergence, defined as follows: a series is absolutely convergent if [math]\displaystyle{ \sum_{n = 1}^\infty |a_n| }[/math] converges. For an unconditionally convergent series, all possible orderings give the same value.
Otherwise, the series is conditionally convergent. A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as [math]\displaystyle{ \pm \infty }[/math].
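The rearrangement in Riemann's theorem can be carried out greedily: add positive terms while the partial sum is below the target, negative terms while above. A Python sketch for the alternating harmonic series 1 − 1/2 + 1/3 − 1/4 + ⋯ (the target values are arbitrary examples):

```python
def rearranged_partial_sum(target, n_terms):
    """Greedily reorder the alternating harmonic series so its
    partial sums approach the chosen target."""
    s = 0.0
    pos, neg = 1, 2  # next odd / next even denominator
    for _ in range(n_terms):
        if s <= target:
            s += 1 / pos   # add next positive term 1/(2k-1)
            pos += 2
        else:
            s -= 1 / neg   # add next negative term -1/(2k)
            neg += 2
    return s

print(rearranged_partial_sum(0.5, 100_000))   # near 0.5
print(rearranged_partial_sum(2.0, 100_000))   # near 2.0
```

Once the partial sum first crosses the target, each overshoot is bounded by the size of the last term used, which tends to 0, so the rearranged series converges to the target.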
A useful application of the theory of sums of series is for power series. These are sums of series of the form [math]\displaystyle{ f(z) = \sum_{n = 0}^\infty c_n z^n. }[/math] Often [math]\displaystyle{ z }[/math] is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of [math]\displaystyle{ z\in \mathbb{C} }[/math] for which the series sum converges is a disk, with its radius known as the radius of convergence.
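For the geometric power series, where every coefficient cₙ = 1, the radius of convergence is 1 and the sum inside the disk is 1/(1 − z). A Python sketch comparing partial sums to the closed form (truncation at 200 terms is an arbitrary choice):

```python
def geometric_partial(z, N):
    """Partial sum sum_{n=0}^{N} z^n of the geometric series."""
    return sum(z ** n for n in range(N + 1))

# Points inside the unit disk, including a complex one (|z| < 1):
for z in (0.5, 0.9, -0.5, 0.5 + 0.5j):
    print(z, geometric_partial(z, 200), 1 / (1 - z))
```

For |z| < 1 the truncation error is bounded by |z|^(N+1)/(1 − |z|), so the partial sums match the closed form to high precision.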
The definition of continuity at a point is given through limits.
The above definition of a limit is true even if [math]\displaystyle{ f(c) \neq L }[/math]. Indeed, the function f need not even be defined at c. However, if [math]\displaystyle{ f(c) }[/math] is defined and is equal to [math]\displaystyle{ L }[/math], then the function is said to be continuous at the point [math]\displaystyle{ c }[/math].
Equivalently, the function is continuous at [math]\displaystyle{ c }[/math] if [math]\displaystyle{ f(x) \rightarrow f(c) }[/math] as [math]\displaystyle{ x \rightarrow c }[/math], or in terms of sequences, whenever [math]\displaystyle{ x_n \rightarrow c }[/math], then [math]\displaystyle{ f(x_n) \rightarrow f(c) }[/math].
An example of a limit where [math]\displaystyle{ f }[/math] is not defined at [math]\displaystyle{ c }[/math] is given below.
Consider the function
[math]\displaystyle{ f(x) = \frac{x^2 - 1}{x - 1}. }[/math]
Then f(1) is not defined (see Indeterminate form), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2:[11]
x | 0.9 | 0.99 | 0.999 | 1.0 | 1.001 | 1.01 | 1.1 |
f(x) | 1.900 | 1.990 | 1.999 | undefined | 2.001 | 2.010 | 2.100 |
Thus, f(x) can be made arbitrarily close to the limit of 2—just by making x sufficiently close to 1.
In other words, [math]\displaystyle{ \lim_{x \to 1} \frac{x^2-1}{x-1} = 2. }[/math]
This can also be calculated algebraically, as [math]\displaystyle{ \frac{x^2-1}{x-1} = \frac{(x+1)(x-1)}{x-1} = x+1 }[/math] for all real numbers x ≠ 1.
Now, since x + 1 is continuous in x at 1, we can plug in 1 for x, leading to the equation [math]\displaystyle{ \lim_{x \to 1} \frac{x^2-1}{x-1} = 1+1 = 2. }[/math]
In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function [math]\displaystyle{ f(x) = \frac{2x-1}{x}. }[/math]
As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish—by making x sufficiently large. So in this case, the limit of f(x) as x approaches infinity is 2, or in mathematical notation,[math]\displaystyle{ \lim_{x\to\infty}\frac{2x-1}{x} = 2. }[/math]
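A short Python sketch of this limit at infinity; the gap |f(x) − 2| equals 1/x, so it shrinks as x grows:

```python
def f(x):
    """f(x) = (2x - 1)/x, which tends to 2 as x -> infinity."""
    return (2 * x - 1) / x

for x in (10, 1000, 10 ** 6):
    print(x, f(x), abs(f(x) - 2))  # gap shrinks like 1/x
```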
An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if [math]\displaystyle{ f }[/math] is a continuous function, then whenever [math]\displaystyle{ a_n \rightarrow a }[/math] in the domain of [math]\displaystyle{ f }[/math], then the limit [math]\displaystyle{ f(a_n) }[/math] exists and furthermore is [math]\displaystyle{ f(a) }[/math].
In the most general setting of topological spaces, a short proof is given below:
Let [math]\displaystyle{ f: X\rightarrow Y }[/math] be a continuous function between topological spaces [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math]. By definition, for each open set [math]\displaystyle{ V }[/math] in [math]\displaystyle{ Y }[/math], the preimage [math]\displaystyle{ f^{-1}(V) }[/math] is open in [math]\displaystyle{ X }[/math].
Now suppose [math]\displaystyle{ a_n \rightarrow a }[/math] is a sequence with limit [math]\displaystyle{ a }[/math] in [math]\displaystyle{ X }[/math]. Then [math]\displaystyle{ f(a_n) }[/math] is a sequence in [math]\displaystyle{ Y }[/math], and [math]\displaystyle{ f(a) }[/math] is some point.
Choose a neighborhood [math]\displaystyle{ V }[/math] of [math]\displaystyle{ f(a) }[/math]. Then [math]\displaystyle{ f^{-1}(V) }[/math] is an open set (by continuity of [math]\displaystyle{ f }[/math]) which in particular contains [math]\displaystyle{ a }[/math], and therefore [math]\displaystyle{ f^{-1}(V) }[/math] is a neighborhood of [math]\displaystyle{ a }[/math]. By the convergence of [math]\displaystyle{ a_n }[/math] to [math]\displaystyle{ a }[/math], there exists an [math]\displaystyle{ N }[/math] such that for [math]\displaystyle{ n \gt N }[/math], we have [math]\displaystyle{ a_n \in f^{-1}(V) }[/math].
Then applying [math]\displaystyle{ f }[/math] to both sides gives that, for the same [math]\displaystyle{ N }[/math], for each [math]\displaystyle{ n \gt N }[/math] we have [math]\displaystyle{ f(a_n) \in V }[/math]. Originally [math]\displaystyle{ V }[/math] was an arbitrary neighborhood of [math]\displaystyle{ f(a) }[/math], so [math]\displaystyle{ f(a_n) \rightarrow f(a) }[/math]. This concludes the proof.
In real analysis, for the more concrete case of real-valued functions defined on a subset [math]\displaystyle{ E \subset \mathbb{R} }[/math], that is, [math]\displaystyle{ f: E \rightarrow \mathbb{R} }[/math], a continuous function may also be defined as a function which is continuous at every point of its domain.
In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets.
In a topological space [math]\displaystyle{ X }[/math], consider a subset [math]\displaystyle{ S }[/math]. A point [math]\displaystyle{ a }[/math] is called a limit point if there is a sequence [math]\displaystyle{ \{a_n\} }[/math] in [math]\displaystyle{ S\backslash\{a\} }[/math] such that [math]\displaystyle{ a_n \rightarrow a }[/math].
The reason why [math]\displaystyle{ \{a_n\} }[/math] is defined to be in [math]\displaystyle{ S\backslash\{a\} }[/math] rather than just [math]\displaystyle{ S }[/math] is illustrated by the following example. Take [math]\displaystyle{ X = \mathbb{R} }[/math] and [math]\displaystyle{ S = [0,1] \cup \{2\} }[/math]. Then [math]\displaystyle{ 2 \in S }[/math], and therefore is the limit of the constant sequence [math]\displaystyle{ 2, 2, \cdots }[/math]. But [math]\displaystyle{ 2 }[/math] is not a limit point of [math]\displaystyle{ S }[/math].
A closed set, which is defined to be the complement of an open set, is equivalently any set [math]\displaystyle{ C }[/math] which contains all its limit points.
The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions [math]\displaystyle{ f }[/math] defined on a subset [math]\displaystyle{ E \subset \mathbb{R} }[/math]. The derivative at [math]\displaystyle{ x \in E }[/math] is defined as follows. If the limit of [math]\displaystyle{ \frac{f(x+h) - f(x)}{h} }[/math] as [math]\displaystyle{ h \rightarrow 0 }[/math] exists, then the derivative at [math]\displaystyle{ x }[/math] is this limit.
Equivalently, it is the limit as [math]\displaystyle{ y \rightarrow x }[/math] of [math]\displaystyle{ \frac{f(y) - f(x)}{y-x}. }[/math]
If the derivative exists, it is commonly denoted by [math]\displaystyle{ f'(x) }[/math].
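The derivative-as-limit definition can be probed numerically. A Python sketch using f = sin at x = 0, where the derivative is cos(0) = 1 (the step sizes are arbitrary):

```python
import math

def difference_quotient(f, x, h):
    """The quotient (f(x + h) - f(x))/h, which tends to f'(x) as h -> 0."""
    return (f(x + h) - f(x)) / h

for h in (0.1, 0.01, 0.001):
    print(h, difference_quotient(math.sin, 0.0, h))
# The quotients approach 1 = cos(0) as h shrinks.
```

For very small h, floating-point cancellation eventually degrades the approximation, which is why numerical differentiation uses moderate step sizes.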
For sequences of real numbers, a number of properties can be proven.[10] Suppose [math]\displaystyle{ \{a_n\} }[/math] and [math]\displaystyle{ \{b_n\} }[/math] are two sequences converging to [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] respectively.
[math]\displaystyle{ a_n + b_n \rightarrow a + b. }[/math]
[math]\displaystyle{ a_n \cdot b_n \rightarrow a \cdot b. }[/math]
[math]\displaystyle{ \frac{1}{a_n} \rightarrow \frac{1}{a} }[/math] (provided [math]\displaystyle{ a \neq 0 }[/math]). Equivalently, the function [math]\displaystyle{ f(x) = 1/x }[/math] is continuous at nonzero [math]\displaystyle{ x }[/math].
A property of convergent sequences of real numbers is that they are Cauchy sequences.[10] The definition of a Cauchy sequence [math]\displaystyle{ \{a_n\} }[/math] is that for every real number [math]\displaystyle{ \epsilon \gt 0 }[/math], there is an [math]\displaystyle{ N }[/math] such that whenever [math]\displaystyle{ m, n \gt N }[/math], [math]\displaystyle{ |a_m - a_n| \lt \epsilon. }[/math]
Informally, for any arbitrarily small error [math]\displaystyle{ \epsilon }[/math], it is possible to find an interval of diameter [math]\displaystyle{ \epsilon }[/math] such that eventually the sequence is contained within the interval.
Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent.
In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, [math]\displaystyle{ \mathbb{Q} }[/math], with the usual distance. The sequence of decimal approximations to [math]\displaystyle{ \sqrt{2} }[/math], truncated at the [math]\displaystyle{ n }[/math]th decimal place is a Cauchy sequence, but does not converge in [math]\displaystyle{ \mathbb{Q} }[/math].
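This counterexample can be sketched in Python using exact rational arithmetic, so every term genuinely lies in ℚ (the truncation indices chosen are arbitrary):

```python
from fractions import Fraction
from math import isqrt

def a(n):
    """The n-digit decimal truncation of sqrt(2), as an exact rational:
    a_n = floor(sqrt(2) * 10^n) / 10^n."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

print([float(a(n)) for n in range(1, 5)])  # 1.4, 1.41, 1.414, 1.4142
# Cauchy: truncations past the N-th digit differ by less than 10^(-N).
assert abs(a(7) - a(5)) < Fraction(1, 10 ** 5)
# But the would-be limit is irrational: a_n^2 never equals 2.
assert all(a(n) ** 2 != 2 for n in range(1, 8))
```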
A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space.
One reason Cauchy sequences can be "easier to work with" than convergent sequences is that they are a property of the sequence [math]\displaystyle{ \{a_n\} }[/math] alone, while convergent sequences require not just the sequence [math]\displaystyle{ \{a_n\} }[/math] but also the limit of the sequence [math]\displaystyle{ a }[/math].
Beyond whether or not a sequence [math]\displaystyle{ \{a_n\} }[/math] converges to a limit [math]\displaystyle{ a }[/math], it is possible to describe how fast a sequence converges to a limit. One way to quantify this is using the order of convergence of a sequence.
A formal definition of order of convergence can be stated as follows. Suppose [math]\displaystyle{ \{a_n\}_{n \gt 0} }[/math] is a sequence of real numbers which is convergent with limit [math]\displaystyle{ a }[/math]. Furthermore, [math]\displaystyle{ a_n \neq a }[/math] for all [math]\displaystyle{ n }[/math]. If positive constants [math]\displaystyle{ \lambda }[/math] and [math]\displaystyle{ \alpha }[/math] exist such that [math]\displaystyle{ \lim_{n \to \infty } \frac{ \left| a_{n+1} - a \right| }{ \left| a_n - a \right| ^\alpha } = \lambda }[/math] then [math]\displaystyle{ a_n }[/math] is said to converge to [math]\displaystyle{ a }[/math] with order of convergence [math]\displaystyle{ \alpha }[/math]. The constant [math]\displaystyle{ \lambda }[/math] is known as the asymptotic error constant.
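As an illustration, Newton's iteration for √2, x_{k+1} = (x_k + 2/x_k)/2, is known to converge quadratically (α = 2); since |a_{n+1} − a| ≈ λ|a_n − a|^α implies log|e_{n+1}| / log|e_n| → α, that ratio estimates the order in this Python sketch:

```python
import math

root = math.sqrt(2)
x = 1.0
errors = []
for _ in range(5):
    x = (x + 2 / x) / 2        # one Newton step for x^2 = 2
    errors.append(abs(x - root))

# Estimate alpha from successive errors (guard against errors of 0,
# which occur once the iterate reaches machine precision).
for e0, e1 in zip(errors, errors[1:]):
    if e0 > 0 and e1 > 0:
        print(math.log(e1) / math.log(e0))  # approaches 2
```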
Order of convergence is used, for example, in the field of numerical analysis for error analysis.
Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits.[12]
There are several theorems or tests that can indicate whether the limit exists; these are known as convergence tests. Examples include the ratio test and the squeeze theorem. However, such tests may not tell how to compute the limit.
Original source: https://en.wikipedia.org/wiki/Limit (mathematics).