In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.).
A formal power series is a special kind of formal series, whose terms are of the form [math]\displaystyle{ a x^n }[/math] where [math]\displaystyle{ x^n }[/math] is the [math]\displaystyle{ n }[/math]th power of a variable [math]\displaystyle{ x }[/math] ([math]\displaystyle{ n }[/math] is a non-negative integer), and [math]\displaystyle{ a }[/math] is called the coefficient. Hence, power series can be viewed as a generalization of polynomials, where the number of terms is allowed to be infinite, with no requirements of convergence. Thus, the series may no longer represent a function of its variable, merely a formal sequence of coefficients, in contrast to a power series, which defines a function by taking numerical values for the variable within a radius of convergence. In a formal power series, the [math]\displaystyle{ x^n }[/math] are used only as position-holders for the coefficients, so that the coefficient of [math]\displaystyle{ x^5 }[/math] is the fifth term in the sequence. In combinatorics, the method of generating functions uses formal power series to represent numerical sequences and multisets, for instance allowing concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved. More generally, formal power series can include series with any finite (or countable) number of variables, and with coefficients in an arbitrary ring.
Rings of formal power series are complete local rings, and this allows using calculus-like methods in the purely algebraic framework of algebraic geometry and commutative algebra. They are analogous in many ways to p-adic integers, which can be defined as formal series of the powers of p.
A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series
If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ... ] as coefficients, even though the corresponding power series diverges for any nonzero value of X.
Arithmetic on formal power series is carried out by simply pretending that the series are polynomials. For example, if
then we add A and B term by term:
We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product):
Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the [math]\displaystyle{ X^5 }[/math] term is given by
For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis.
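As an informal sketch of the multiplication rule, the Cauchy product of two coefficient sequences can be computed in a few lines of Python (the function name and the truncated-list representation are illustrative choices, not standard notation):

```python
def cauchy_product(a, b, n):
    """First n coefficients of the product of two formal power series,
    each given as a (possibly truncated) list of coefficients."""
    coeff = lambda s, i: s[i] if i < len(s) else 0
    return [sum(coeff(a, i) * coeff(b, k - i) for i in range(k + 1))
            for k in range(n)]

# (1 + X)^2 = 1 + 2X + X^2
print(cauchy_product([1, 1], [1, 1], 3))
```

Truncating to n coefficients is harmless because, as noted above, each coefficient of the product depends on only finitely many coefficients of the factors.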
Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A−1. Now we can define division of formal power series by defining B/A to be the product BA−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula
An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator [math]\displaystyle{ [X^n] }[/math] applied to a formal power series [math]\displaystyle{ A }[/math] in one variable extracts the coefficient of the [math]\displaystyle{ n }[/math]th power of the variable, so that [math]\displaystyle{ [X^2]A=5 }[/math] and [math]\displaystyle{ [X^5]A=-11 }[/math]. Other examples include
Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below.
If one considers the set of all formal power series in X with coefficients in a commutative ring R, the elements of this set collectively constitute another ring which is written [math]\displaystyle{ R[[X]], }[/math] and called the ring of formal power series in the variable X over R.
One can characterize [math]\displaystyle{ R[[X]] }[/math] abstractly as the completion of the polynomial ring [math]\displaystyle{ R[X] }[/math] equipped with a particular metric. This automatically gives [math]\displaystyle{ R[[X]] }[/math] the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe [math]\displaystyle{ R[[X]] }[/math] more explicitly, and define the ring structure and topological structure separately, as follows.
As a set, [math]\displaystyle{ R[[X]] }[/math] can be constructed as the set [math]\displaystyle{ R^\N }[/math] of all infinite sequences of elements of [math]\displaystyle{ R }[/math], indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index [math]\displaystyle{ n }[/math] is [math]\displaystyle{ a_n }[/math] by [math]\displaystyle{ (a_n) }[/math], one defines addition of two such sequences by
and multiplication by
This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, [math]\displaystyle{ R^\N }[/math] becomes a commutative ring with zero element [math]\displaystyle{ (0,0,0,\ldots) }[/math] and multiplicative identity [math]\displaystyle{ (1,0,0,\ldots) }[/math].
The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds [math]\displaystyle{ R }[/math] into [math]\displaystyle{ R[[X]] }[/math] by sending any (constant) [math]\displaystyle{ a \in R }[/math] to the sequence [math]\displaystyle{ (a,0,0,\ldots) }[/math] and designates the sequence [math]\displaystyle{ (0,1,0,0,\ldots) }[/math] by [math]\displaystyle{ X }[/math]; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as
these are precisely the polynomials in [math]\displaystyle{ X }[/math]. Given this, it is quite natural and convenient to designate a general sequence [math]\displaystyle{ (a_n)_{n\in\N} }[/math] by the formal expression [math]\displaystyle{ \textstyle\sum_{i\in\N}a_i X^i }[/math], even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as
and
which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition.
Having stipulated conventionally that
[math]\displaystyle{ (a_0, a_1, a_2, a_3, \ldots) = \sum_{i=0}^\infty a_i X^i, }[/math]     (1)
one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in [math]\displaystyle{ R^\N }[/math] is defined and a topology on [math]\displaystyle{ R^\N }[/math] is constructed. There are several equivalent ways to define the desired topology.
Informally, two sequences [math]\displaystyle{ (a_n) }[/math] and [math]\displaystyle{ (b_n) }[/math] become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of [math]\displaystyle{ X }[/math] the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (1), regardless of the values [math]\displaystyle{ a_n }[/math], since inclusion of the term for [math]\displaystyle{ i=n }[/math] gives the last (and in fact only) change to the coefficient of [math]\displaystyle{ X^n }[/math]. It is also obvious that the limit of the sequence of partial sums is equal to the left hand side.
This topological structure, together with the ring operations described above, forms a topological ring. This is called the ring of formal power series over [math]\displaystyle{ R }[/math] and is denoted by [math]\displaystyle{ R[[X]] }[/math]. The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of [math]\displaystyle{ X }[/math] occurs in only finitely many terms.
The topological structure allows much more flexible usage of infinite summations. For instance the rule for multiplication can be restated simply as
since only finitely many terms on the right affect any fixed [math]\displaystyle{ X^n }[/math]. Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero).
The above topology is the finest topology for which
always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring [math]\displaystyle{ R }[/math] already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series.
In the ring of formal power series [math]\displaystyle{ \Z[[X]][[Y]] }[/math], the topology of the above construction only relates to the indeterminate [math]\displaystyle{ Y }[/math], since the topology that was put on [math]\displaystyle{ \Z[[X]] }[/math] has been replaced by the discrete topology when defining the topology of the whole ring. So
converges (and its sum can be written as [math]\displaystyle{ \tfrac{X}{1-Y} }[/math]); however
would be considered to be divergent, since every term affects the coefficient of [math]\displaystyle{ Y }[/math]. This asymmetry disappears if the power series ring in [math]\displaystyle{ Y }[/math] is given the product topology where each copy of [math]\displaystyle{ \Z[[X]] }[/math] is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of [math]\displaystyle{ \Z[[X]][[Y]] }[/math] converges if the coefficient of each power of [math]\displaystyle{ Y }[/math] converges to a formal power series in [math]\displaystyle{ X }[/math], a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of [math]\displaystyle{ Y }[/math] converges to [math]\displaystyle{ \tfrac{1}{1-X} }[/math], so the whole summation converges to [math]\displaystyle{ \tfrac{Y}{1-X} }[/math].
This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing [math]\displaystyle{ \Z[[X,Y]] }[/math] and here a sequence converges if and only if the coefficient of every monomial [math]\displaystyle{ X^iY^j }[/math] stabilizes. This topology, which is also the [math]\displaystyle{ I }[/math]-adic topology, where [math]\displaystyle{ I=(X,Y) }[/math] is the ideal generated by [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], still enjoys the property that a summation converges if and only if its terms tend to 0.
The same principle could be used to make other divergent limits converge. For instance in [math]\displaystyle{ \R[[X]] }[/math] the limit
does not exist, so in particular it does not converge to
This is because for [math]\displaystyle{ i\geq 2 }[/math] the coefficient [math]\displaystyle{ \tbinom{n}{i}/n^i }[/math] of [math]\displaystyle{ X^i }[/math] does not stabilize as [math]\displaystyle{ n\to \infty }[/math]. It does however converge in the usual topology of [math]\displaystyle{ \R }[/math], and in fact to the coefficient [math]\displaystyle{ \tfrac{1}{i!} }[/math] of [math]\displaystyle{ \exp(X) }[/math]. Therefore, if one would give [math]\displaystyle{ \R[[X]] }[/math] the product topology of [math]\displaystyle{ \R^\N }[/math] where the topology of [math]\displaystyle{ \R }[/math] is the usual topology rather than the discrete one, then the above limit would converge to [math]\displaystyle{ \exp(X) }[/math]. This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0.
The ring [math]\displaystyle{ R[[X]] }[/math] may be characterized by the following universal property. If [math]\displaystyle{ S }[/math] is a commutative associative algebra over [math]\displaystyle{ R }[/math], if [math]\displaystyle{ I }[/math] is an ideal of [math]\displaystyle{ S }[/math] such that the [math]\displaystyle{ I }[/math]-adic topology on [math]\displaystyle{ S }[/math] is complete, and if [math]\displaystyle{ x }[/math] is an element of [math]\displaystyle{ I }[/math], then there is a unique [math]\displaystyle{ \Phi: R[[X]]\to S }[/math] with the following properties:
One can perform algebraic operations on power series to generate new power series.[1][2] Besides the ring structure operations defined above, we have the following.
For any natural number n we have [math]\displaystyle{ \left( \sum_{k=0}^\infty a_k X^k \right)^{\!n} =\, \sum_{m=0}^\infty c_m X^m, }[/math] where [math]\displaystyle{ \begin{align} c_0 &= a_0^n,\\ c_m &= \frac{1}{m a_0} \sum_{k=1}^m (kn - m+k) a_{k} c_{m-k}, \ \ \ m \geq 1. \end{align} }[/math]
(This formula can only be used if m and a0 are invertible in the ring of coefficients.)
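As a sketch, the recurrence above translates directly into Python; exact rational arithmetic stands in for the requirement that m and a0 be invertible (the function name and list representation are ours):

```python
from fractions import Fraction

def series_power(a, n, terms):
    """First `terms` coefficients of (sum a_k X^k)^n, via the recurrence
    c_0 = a_0^n,  c_m = (1/(m a_0)) * sum_{k=1}^m (k n - m + k) a_k c_{m-k}."""
    coeff = lambda k: Fraction(a[k]) if k < len(a) else Fraction(0)
    c = [coeff(0) ** n]
    for m in range(1, terms):
        s = sum((k * n - m + k) * coeff(k) * c[m - k] for k in range(1, m + 1))
        c.append(s / (m * coeff(0)))
    return c

# (1 + X)^3 = 1 + 3X + 3X^2 + X^3
print(series_power([1, 1], 3, 4))
```

The result can be cross-checked against repeated Cauchy multiplication; for instance the square of 1 + 2X + X^2 reproduces the binomial coefficients of (1 + X)^4.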
In the case of formal power series with complex coefficients, the complex powers are well defined at least for series f with constant term equal to 1. In this case, [math]\displaystyle{ f^{\alpha} }[/math] can be defined either by composition with the binomial series (1+x)α, or by composition with the exponential and the logarithmic series, [math]\displaystyle{ f^{\alpha} = \exp(\alpha\log(f)), }[/math] or as the solution of the differential equation [math]\displaystyle{ f( f^{\alpha})' = \alpha f^{\alpha} f' }[/math] with constant term 1, the three definitions being equivalent. The rules of calculus [math]\displaystyle{ (f^\alpha)^\beta = f^{\alpha\beta} }[/math] and [math]\displaystyle{ f^\alpha g^\alpha = (fg)^\alpha }[/math] easily follow.
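The definition via the differential equation leads to the same kind of coefficient recurrence, now with an arbitrary rational exponent. The following sketch (illustrative, assuming constant term 1) computes (1 + X)^{1/2} and checks that squaring it recovers 1 + X, consistent with the rule f^α f^β = f^{α+β}:

```python
from fractions import Fraction

def series_pow(a, alpha, terms):
    """(sum a_k X^k)^alpha for a[0] == 1, from the coefficient recurrence
    obtained by comparing coefficients in f*(f^alpha)' = alpha*f^alpha*f'."""
    assert a[0] == 1, "this sketch assumes constant term 1"
    coeff = lambda k: Fraction(a[k]) if k < len(a) else Fraction(0)
    c = [Fraction(1)]
    for m in range(1, terms):
        s = sum((k * alpha - m + k) * coeff(k) * c[m - k] for k in range(1, m + 1))
        c.append(s / m)
    return c

sqrt_f = series_pow([1, 1], Fraction(1, 2), 4)   # (1 + X)^{1/2}
```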
The series
is invertible in [math]\displaystyle{ R[[X]] }[/math] if and only if its constant coefficient [math]\displaystyle{ a_0 }[/math] is invertible in [math]\displaystyle{ R }[/math]. This condition is necessary, for the following reason: if we suppose that [math]\displaystyle{ A }[/math] has an inverse [math]\displaystyle{ B = b_0 + b_1 X + \cdots }[/math] then the constant term [math]\displaystyle{ a_0b_0 }[/math] of [math]\displaystyle{ A \cdot B }[/math] is the constant term of the identity series, i.e. it is 1. This condition is also sufficient; we may compute the coefficients of the inverse series [math]\displaystyle{ B }[/math] via the explicit recursive formula
An important special case is that the geometric series formula is valid in [math]\displaystyle{ R[[X]] }[/math]:
If [math]\displaystyle{ R=K }[/math] is a field, then a series is invertible if and only if the constant term is non-zero, i.e. if and only if the series is not divisible by [math]\displaystyle{ X }[/math]. This means that [math]\displaystyle{ K[[X]] }[/math] is a discrete valuation ring with uniformizing parameter [math]\displaystyle{ X }[/math].
The computation of a quotient [math]\displaystyle{ f/g=h }[/math]
assuming the denominator is invertible (that is, [math]\displaystyle{ a_0 }[/math] is invertible in the ring of scalars), can be performed as a product [math]\displaystyle{ f }[/math] and the inverse of [math]\displaystyle{ g }[/math], or directly equating the coefficients in [math]\displaystyle{ f=gh }[/math]:
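Both routes can be sketched in Python: the recursion for the inverse series gives the coefficients of [math]\displaystyle{ B }[/math], and a quotient is then a Cauchy product with that inverse (names and representation are illustrative):

```python
from fractions import Fraction

def series_inverse(a, terms):
    """First `terms` coefficients of 1/A for A with invertible constant term:
    b_0 = 1/a_0,  b_n = -(1/a_0) * sum_{i=1}^n a_i b_{n-i}."""
    coeff = lambda i: Fraction(a[i]) if i < len(a) else Fraction(0)
    b = [1 / coeff(0)]
    for n in range(1, terms):
        b.append(-sum(coeff(i) * b[n - i] for i in range(1, n + 1)) / coeff(0))
    return b

# geometric series: 1/(1 - X) = 1 + X + X^2 + ...
print(series_inverse([1, -1], 5))
```

A quotient such as (1 + X)/(1 - X) is then obtained by multiplying the numerator's coefficients against this inverse, term by term as in the Cauchy product.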
The coefficient extraction operator applied to a formal power series
in X is written
and extracts the coefficient of [math]\displaystyle{ X^m }[/math], so that
Given formal power series
one may form the composition
where the coefficients cn are determined by "expanding out" the powers of f(X):
Here the sum is extended over all (k, j) with [math]\displaystyle{ k\in\N }[/math] and [math]\displaystyle{ j\in\N_+^k }[/math] with [math]\displaystyle{ |j|:=j_1+\cdots+j_k=n. }[/math]
A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0.
Composition is only valid when [math]\displaystyle{ f(X) }[/math] has no constant term, so that each [math]\displaystyle{ c_n }[/math] depends on only a finite number of coefficients of [math]\displaystyle{ f(X) }[/math] and [math]\displaystyle{ g(X) }[/math]. In other words, the series for [math]\displaystyle{ g(f(X)) }[/math] converges in the topology of [math]\displaystyle{ R[[X]]. }[/math]
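Since each [math]\displaystyle{ c_n }[/math] is a finite sum, truncated composition is computable. A sketch in Python (illustrative names), using the fact that [math]\displaystyle{ f^k }[/math] has order at least [math]\displaystyle{ k }[/math] so the sum over powers can stop at the truncation order; composing the geometric series with f(X) = X + X^2 yields 1/(1 - X - X^2), whose coefficients are the Fibonacci numbers:

```python
def compose(g, f, terms):
    """First `terms` coefficients of g(f(X)); requires f to have no constant term."""
    assert f and f[0] == 0, "composition needs f(0) = 0"
    def mul(a, b):
        c = lambda s, i: s[i] if i < len(s) else 0
        return [sum(c(a, i) * c(b, k - i) for i in range(k + 1)) for k in range(terms)]
    out, fpow = [0] * terms, [1] + [0] * (terms - 1)   # fpow holds f^k, starting at f^0
    for k in range(terms):                             # f^k has order >= k, so stop here
        gk = g[k] if k < len(g) else 0
        out = [o + gk * p for o, p in zip(out, fpow)]
        fpow = mul(fpow, f)
    return out

print(compose([1, 1, 1, 1, 1], [0, 1, 1], 5))   # Fibonacci numbers appear
```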
Assume that the ring [math]\displaystyle{ R }[/math] has characteristic 0 and the nonzero integers are invertible in [math]\displaystyle{ R }[/math]. If we denote by [math]\displaystyle{ \exp(X) }[/math] the formal power series
then the expression
makes perfect sense as a formal power series. However, the statement
is not a valid application of the composition operation for formal power series. Rather, it is confusing the notions of convergence in [math]\displaystyle{ R[[X]] }[/math] and convergence in [math]\displaystyle{ R }[/math]; indeed, the ring [math]\displaystyle{ R }[/math] may not even contain any number [math]\displaystyle{ e }[/math] with the appropriate properties.
Whenever a formal series
has [math]\displaystyle{ f_0 = 0 }[/math] and [math]\displaystyle{ f_1 }[/math] an invertible element of [math]\displaystyle{ R }[/math], there exists a series
that is the composition inverse of [math]\displaystyle{ f }[/math], meaning that composing [math]\displaystyle{ f }[/math] with [math]\displaystyle{ g }[/math] gives the series representing the identity function [math]\displaystyle{ x = 0 + 1x + 0x^2+ 0x^3+\cdots }[/math]. The coefficients of [math]\displaystyle{ g }[/math] may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is 1 at degree 1 and 0 at every degree greater than 1). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g.
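The recursive determination of the coefficients of [math]\displaystyle{ g }[/math] can be sketched as follows (Python, exact rationals; function names are ours). Each new coefficient [math]\displaystyle{ g_m }[/math] is fixed by requiring the coefficient of [math]\displaystyle{ X^m }[/math] in [math]\displaystyle{ f(g(X)) }[/math] to vanish, and it enters that coefficient linearly with slope [math]\displaystyle{ f_1 }[/math]:

```python
from fractions import Fraction

def series_revert(f, terms):
    """Compositional inverse g of f = f_1 X + f_2 X^2 + ... (f_0 = 0, f_1 invertible),
    found coefficient by coefficient from f(g(X)) = X."""
    fc = lambda k: Fraction(f[k]) if k < len(f) else Fraction(0)
    def compose_coeff(g, m):
        # [X^m] f(g(X)) for the partially known g (known up to degree m)
        out, gpow = Fraction(0), [Fraction(1)] + [Fraction(0)] * m   # g^k truncated
        for k in range(m + 1):                                       # g^k has order >= k
            out += fc(k) * gpow[m]
            gpow = [sum(gpow[i] * g[j - i] for i in range(j + 1))
                    for j in range(m + 1)]
        return out
    g = [Fraction(0), 1 / fc(1)]
    for m in range(2, terms):
        g.append(Fraction(0))                    # tentative g_m = 0
        g[m] = -compose_coeff(g, m) / fc(1)      # solve f_1 g_m + (rest) = 0
    return g[:terms]

# inverse of f = X + X^2: signed Catalan numbers X - X^2 + 2X^3 - 5X^4 + ...
print(series_revert([0, 1, 1], 5))
```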
Given a formal power series
we define its formal derivative, denoted Df or f ′, by
The symbol D is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial.
This operation is R-linear:
for any a, b in R and any f, g in [math]\displaystyle{ R[[X]]. }[/math] Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid:
and the chain rule works as well:
whenever the appropriate compositions of series are defined (see above under composition of series).
Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that
where [math]\displaystyle{ D^k }[/math] denotes the [math]\displaystyle{ k }[/math]th formal derivative (that is, the result of formally differentiating [math]\displaystyle{ k }[/math] times).
If [math]\displaystyle{ R }[/math] is a ring with characteristic zero and the nonzero integers are invertible in [math]\displaystyle{ R }[/math], then given a formal power series
we define its formal antiderivative or formal indefinite integral by
for any constant [math]\displaystyle{ C \in R }[/math].
This operation is R-linear:
for any a, b in R and any f, g in [math]\displaystyle{ R[[X]]. }[/math] Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. For example, the formal antiderivative is the right inverse of the formal derivative:
for any [math]\displaystyle{ f \in R[[X]] }[/math].
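Both operations are straightforward to sketch in code (Python, exact rationals; names ours); the check that [math]\displaystyle{ D }[/math] undoes the antiderivative is exactly the right-inverse property above:

```python
from fractions import Fraction

def formal_derivative(f):
    """D(sum a_n X^n) = sum n a_n X^{n-1}, term by term."""
    return [Fraction(n) * Fraction(a) for n, a in enumerate(f)][1:]

def formal_antiderivative(f, C=0):
    """Indefinite integral with constant term C; needs the nonzero integers invertible."""
    return [Fraction(C)] + [Fraction(a) / (n + 1) for n, a in enumerate(f)]

f = [1, 2, 3]
print(formal_derivative(formal_antiderivative(f, 7)))   # recovers f
```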
[math]\displaystyle{ R[[X]] }[/math] is an associative algebra over [math]\displaystyle{ R }[/math] which contains the ring [math]\displaystyle{ R[X] }[/math] of polynomials over [math]\displaystyle{ R }[/math]; the polynomials correspond to the sequences which end in zeros.
The Jacobson radical of [math]\displaystyle{ R[[X]] }[/math] is the ideal generated by [math]\displaystyle{ X }[/math] and the Jacobson radical of [math]\displaystyle{ R }[/math]; this is implied by the element invertibility criterion discussed above.
The maximal ideals of [math]\displaystyle{ R[[X]] }[/math] all arise from those in [math]\displaystyle{ R }[/math] in the following manner: an ideal [math]\displaystyle{ M }[/math] of [math]\displaystyle{ R[[X]] }[/math] is maximal if and only if [math]\displaystyle{ M\cap R }[/math] is a maximal ideal of [math]\displaystyle{ R }[/math] and [math]\displaystyle{ M }[/math] is generated as an ideal by [math]\displaystyle{ X }[/math] and [math]\displaystyle{ M\cap R }[/math].
Several algebraic properties of [math]\displaystyle{ R }[/math] are inherited by [math]\displaystyle{ R[[X]] }[/math]:
The metric space [math]\displaystyle{ (R[[X]], d) }[/math] is complete.
The ring [math]\displaystyle{ R[[X]] }[/math] is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on [math]\displaystyle{ R[[X]] }[/math] as a product topology.
The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem.
Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions.
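As a sketch of the method: the generating function of the Fibonacci numbers is [math]\displaystyle{ X/(1-X-X^2) }[/math], and expanding the inverse of [math]\displaystyle{ 1-X-X^2 }[/math] by the recursion for inverse series reproduces the Fibonacci recurrence itself:

```python
def fib_coeffs(terms):
    """Coefficients of X/(1 - X - X^2) via the inverse-series recursion
    b_n = -(a_1 b_{n-1} + a_2 b_{n-2}); with a = 1 - X - X^2 this is
    exactly the Fibonacci recurrence b_n = b_{n-1} + b_{n-2}."""
    a = [1, -1, -1]
    coeff = lambda i: a[i] if i < len(a) else 0
    b = [1]
    for n in range(1, terms):
        b.append(-sum(coeff(i) * b[n - i] for i in range(1, n + 1)))
    return [0] + b[:terms - 1]   # the numerator X shifts everything up one degree

print(fib_coeffs(8))
```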
One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of [math]\displaystyle{ \Q[[X]] }[/math]:
Then one can show that
The last one is valid in the ring [math]\displaystyle{ \Q[[X, Y]]. }[/math]
For K a field, the ring [math]\displaystyle{ K[[X_1, \ldots, X_r]] }[/math] is often used as the "standard, most general" complete local ring over K in algebra.
In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let
and suppose [math]\displaystyle{ S }[/math] is a commutative associative algebra over [math]\displaystyle{ R }[/math], [math]\displaystyle{ I }[/math] is an ideal in [math]\displaystyle{ S }[/math] such that the I-adic topology on [math]\displaystyle{ S }[/math] is complete, and [math]\displaystyle{ x }[/math] is an element of [math]\displaystyle{ I }[/math]. Define:
This series is guaranteed to converge in [math]\displaystyle{ S }[/math] given the above assumptions on [math]\displaystyle{ x }[/math]. Furthermore, we have
and
Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved.
Since the topology on [math]\displaystyle{ R[[X]] }[/math] is the [math]\displaystyle{ (X) }[/math]-adic topology and [math]\displaystyle{ R[[X]] }[/math] is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal [math]\displaystyle{ (X) }[/math]): [math]\displaystyle{ f(0) }[/math], [math]\displaystyle{ f(X^2-X) }[/math] and [math]\displaystyle{ f((1-X)^{-1}-1) }[/math] are all well defined for any formal power series [math]\displaystyle{ f \in R[[X]]. }[/math]
With this formalism, we can give an explicit formula for the multiplicative inverse of a power series [math]\displaystyle{ f }[/math] whose constant coefficient [math]\displaystyle{ a=f(0) }[/math] is invertible in [math]\displaystyle{ R }[/math]:
If the formal power series [math]\displaystyle{ g }[/math] with [math]\displaystyle{ g(0)=0 }[/math] is given implicitly by the equation
where [math]\displaystyle{ f }[/math] is a known power series with [math]\displaystyle{ f(0)=0 }[/math], then the coefficients of [math]\displaystyle{ g }[/math] can be explicitly computed using the Lagrange inversion formula.
The formal Laurent series over a ring [math]\displaystyle{ R }[/math] are defined similarly to formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as
for some integer [math]\displaystyle{ N }[/math], so that there are only finitely many negative [math]\displaystyle{ n }[/math] with [math]\displaystyle{ a_n \neq 0 }[/math]. (This is different from the classical Laurent series of complex analysis.) For a non-zero formal Laurent series, the minimal integer [math]\displaystyle{ n }[/math] such that [math]\displaystyle{ a_n\neq 0 }[/math] is called the order of [math]\displaystyle{ f }[/math] and is denoted [math]\displaystyle{ \operatorname{ord}(f). }[/math] (The order of the zero series is [math]\displaystyle{ +\infty }[/math].)
Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of [math]\displaystyle{ X^k }[/math] in the product of two series with respective sequences of coefficients [math]\displaystyle{ \{a_n\} }[/math] and [math]\displaystyle{ \{b_n\} }[/math] is [math]\displaystyle{ \sum_{i\in\Z}a_ib_{k-i}. }[/math] This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices.
The formal Laurent series form the ring of formal Laurent series over [math]\displaystyle{ R }[/math], denoted by [math]\displaystyle{ R((X)) }[/math]. It is equal to the localization of the ring [math]\displaystyle{ R[[X]] }[/math] of formal power series with respect to the set of positive powers of [math]\displaystyle{ X }[/math]. If [math]\displaystyle{ R=K }[/math] is a field, then [math]\displaystyle{ K((X)) }[/math] is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain [math]\displaystyle{ K[[X]] }[/math].
As with [math]\displaystyle{ R[[X]] }[/math], the ring [math]\displaystyle{ R((X)) }[/math] of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric [math]\displaystyle{ d(f,g)=2^{-\operatorname{ord}(f-g)}. }[/math]
One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series [math]\displaystyle{ f }[/math] above is [math]\displaystyle{ f' = Df = \sum_{n\in\Z} na_n X^{n-1}, }[/math] which is again a formal Laurent series. If [math]\displaystyle{ f }[/math] is a non-constant formal Laurent series with coefficients in a field of characteristic 0, then one has [math]\displaystyle{ \operatorname{ord}(f')= \operatorname{ord}(f)-1. }[/math] However, in general this is not the case, since the factor [math]\displaystyle{ n }[/math] for the lowest order term could be equal to 0 in [math]\displaystyle{ R }[/math].
Assume that [math]\displaystyle{ K }[/math] is a field of characteristic 0. Then the map
is a [math]\displaystyle{ K }[/math]-derivation that satisfies
The latter shows that the coefficient of [math]\displaystyle{ X^{-1} }[/math] in [math]\displaystyle{ f }[/math] is of particular interest; it is called formal residue of [math]\displaystyle{ f }[/math] and denoted [math]\displaystyle{ \operatorname{Res}(f) }[/math]. The map
is [math]\displaystyle{ K }[/math]-linear, and by the above observation one has an exact sequence
Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any [math]\displaystyle{ f, g\in K((X)) }[/math]
Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to [math]\displaystyle{ (fg)'=f'g+fg' }[/math]. Property (iii): any [math]\displaystyle{ f }[/math] can be written in the form [math]\displaystyle{ f=X^mg }[/math], with [math]\displaystyle{ m=\operatorname{ord}(f) }[/math] and [math]\displaystyle{ \operatorname{ord}(g)=0 }[/math]: then [math]\displaystyle{ f'/f = mX^{-1}+g'/g. }[/math] Since [math]\displaystyle{ \operatorname{ord}(g)=0 }[/math], the series [math]\displaystyle{ g }[/math] is invertible in [math]\displaystyle{ K[[X]] }[/math], so [math]\displaystyle{ g'/g \in K[[X]] \subset \operatorname{im}(D) = \ker(\operatorname{Res}), }[/math] whence [math]\displaystyle{ \operatorname{Res}(f'/f)=m. }[/math] Property (iv): Since [math]\displaystyle{ \operatorname{im}(D) = \ker(\operatorname{Res}), }[/math] we can write [math]\displaystyle{ g=g_{-1}X^{-1}+G', }[/math] with [math]\displaystyle{ G \in K((X)) }[/math]. Consequently, [math]\displaystyle{ (g\circ f)f'= g_{-1}f^{-1}f'+(G'\circ f)f' = g_{-1}f'/f + (G \circ f)' }[/math] and (iv) follows from (i) and (iii). Property (v) is clear from the definition.
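Rule (iii) can be checked numerically with a small sketch (Python; the pair-of-(order, coefficients) representation of a Laurent series is an illustrative choice, not standard):

```python
from fractions import Fraction

def laurent_mul(f, g, terms):
    """Multiply Laurent series given as (order, coeffs) pairs; truncate the result."""
    (of, a), (og, b) = f, g
    c = lambda s, i: s[i] if 0 <= i < len(s) else 0
    return (of + og, [sum(Fraction(c(a, i)) * c(b, k - i) for i in range(k + 1))
                      for k in range(terms)])

def log_deriv_residue(order, coeffs, terms=8):
    """Res(f'/f) for f = X^order * (coeffs[0] + coeffs[1] X + ...), coeffs[0] != 0."""
    # f' as a Laurent series: the term of exponent order+k picks up a factor order+k
    fprime = (order - 1, [Fraction((order + k) * coeffs[k]) if k < len(coeffs)
                          else Fraction(0) for k in range(terms)])
    # 1/f = X^{-order} * (ordinary power-series inverse of coeffs)
    a = lambda i: Fraction(coeffs[i]) if i < len(coeffs) else Fraction(0)
    inv = [1 / a(0)]
    for n in range(1, terms):
        inv.append(-sum(a(i) * inv[n - i] for i in range(1, n + 1)) / a(0))
    o, prod = laurent_mul(fprime, (-order, inv), terms)
    return prod[-1 - o] if 0 <= -1 - o < terms else 0   # coefficient of X^{-1}

# f = X^2 + X^3 has ord(f) = 2, and indeed Res(f'/f) = 2
print(log_deriv_residue(2, [1, 1]))
```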
As mentioned above, any formal series [math]\displaystyle{ f \in K[[X]] }[/math] with [math]\displaystyle{ f_0 = 0 }[/math] and [math]\displaystyle{ f_1 \neq 0 }[/math] has a composition inverse [math]\displaystyle{ g \in K[[X]]. }[/math] The following relation between the coefficients of [math]\displaystyle{ g^n }[/math] and [math]\displaystyle{ f^{-k} }[/math] holds ("Lagrange inversion formula"):
In particular, for n = 1 and all k ≥ 1,
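For a concrete check of this special case, take the (illustrative) series [math]\displaystyle{ f(X) = X+X^2 }[/math], so that [math]\displaystyle{ X/f = 1/(1+X) }[/math] and [math]\displaystyle{ \operatorname{Res}(f^{-k}) = [X^{k-1}](1+X)^{-k} }[/math] by the binomial series; the formula then produces signed Catalan numbers, and the resulting [math]\displaystyle{ g }[/math] indeed satisfies [math]\displaystyle{ g+g^2 = X }[/math]:

```python
from math import comb

def g_coeff(k):
    """[X^k]g from k*[X^k]g = [X^{k-1}](X/f)^k with f = X + X^2, i.e.
    [X^{k-1}](1+X)^{-k} = (-1)^{k-1} C(2k-2, k-1), a signed Catalan multiple."""
    return (-1) ** (k - 1) * comb(2 * k - 2, k - 1) // k

g = [0] + [g_coeff(k) for k in range(1, 6)]
# check the defining equation g + g^2 = X, coefficient by coefficient
sq = [sum(g[i] * g[k - i] for i in range(k + 1)) for k in range(6)]
assert all(g[k] + sq[k] == (1 if k == 1 else 0) for k in range(6))
print(g)
```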
Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting it here. Noting [math]\displaystyle{ \operatorname{ord}(f) =1 }[/math], we can apply the rules of calculus above, crucially Rule (iv) substituting [math]\displaystyle{ X \rightsquigarrow f(X) }[/math], to get:
Generalizations. One may observe that the above computation can be repeated plainly in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in the [math]\displaystyle{ \Complex((X)) }[/math]-modules [math]\displaystyle{ X^{\alpha}\Complex((X)), }[/math] where α is a complex exponent. As a consequence, if f and g are as above, with [math]\displaystyle{ f_1=g_1=1 }[/math], we can relate the complex powers of f / X and g / X: precisely, if α and β are non-zero complex numbers with negative integer sum, [math]\displaystyle{ m=-\alpha-\beta\in\N, }[/math] then
For instance, this way one finds the power series for complex powers of the Lambert function.
Formal power series in any number of indeterminates (even infinitely many) can be defined. If I is an index set and [math]\displaystyle{ X_I }[/math] is the set of indeterminates [math]\displaystyle{ X_i }[/math] for [math]\displaystyle{ i\in I }[/math], then a monomial [math]\displaystyle{ X^\alpha }[/math] is any finite product of elements of [math]\displaystyle{ X_I }[/math] (repetitions allowed); a formal power series in [math]\displaystyle{ X_I }[/math] with coefficients in a ring R is determined by any mapping from the set of monomials [math]\displaystyle{ X^\alpha }[/math] to a corresponding coefficient [math]\displaystyle{ c_\alpha }[/math], and is denoted [math]\displaystyle{ \sum_\alpha c_\alpha X^\alpha }[/math]. The set of all such formal power series is denoted [math]\displaystyle{ R[[X_I]], }[/math] and it is given a ring structure by defining
[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_\alpha(c_\alpha+d_\alpha)X^\alpha }[/math]
and
[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\beta d_\beta X^\beta\right)=\sum_{\alpha,\beta}c_\alpha d_\beta X^{\alpha+\beta}. }[/math]
The topology on [math]\displaystyle{ R[[X_I]] }[/math] is such that a sequence of its elements converges only if for each monomial [math]\displaystyle{ X^\alpha }[/math] the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of [math]\displaystyle{ R[[X_I]] }[/math] generated by all the indeterminates in [math]\displaystyle{ X_I }[/math]. This does not hold if I is infinite. For example, if [math]\displaystyle{ I=\N, }[/math] then the sequence [math]\displaystyle{ (f_n)_{n\in \N} }[/math] with [math]\displaystyle{ f_n = X_n + X_{n+1} + X_{n+2} + \cdots }[/math] does not converge with respect to any J-adic topology on [math]\displaystyle{ R[[X_I]], }[/math] but clearly for each monomial the corresponding coefficient stabilizes.
As remarked above, the topology on a repeated formal power series ring like [math]\displaystyle{ R[[X]][[Y]] }[/math] is usually chosen in such a way that it becomes isomorphic as a topological ring to [math]\displaystyle{ R[[X,Y]]. }[/math]
All of the operations defined for series in one variable may be extended to the several variables case.
In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other.
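As an illustration of these operations in two variables, the following Python sketch (a dict-of-exponents representation, our own convention) multiplies truncated elements of ℤ[[X, Y]] and checks that the two partial derivative operators commute. It uses the series 1/(1 − X − Y), whose coefficients are binomial coefficients.

```python
from math import comb

DEG = 6  # truncate to total degree < DEG

def mul(a, b):
    """Product of truncated series in Z[[X, Y]], stored as dicts (i, j) -> coefficient."""
    c = {}
    for (i1, j1), u in a.items():
        for (i2, j2), v in b.items():
            if i1 + i2 + j1 + j2 < DEG:
                key = (i1 + i2, j1 + j2)
                c[key] = c.get(key, 0) + u * v
    return {k: v for k, v in c.items() if v != 0}

def d(a, var):
    """Formal partial derivative with respect to X (var = 0) or Y (var = 1)."""
    out = {}
    for (i, j), u in a.items():
        e = (i, j)[var]
        if e > 0:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] = out.get(key, 0) + e * u
    return out

# s = 1/(1 - X - Y) truncated: the coefficient of X^i Y^j is binomial(i+j, i)
s = {(i, j): comb(i + j, i) for i in range(DEG) for j in range(DEG) if i + j < DEG}
lin = {(0, 0): 1, (1, 0): -1, (0, 1): -1}  # 1 - X - Y

assert mul(lin, s) == {(0, 0): 1}      # s inverts 1 - X - Y modulo degree DEG
assert d(d(s, 0), 1) == d(d(s, 1), 0)  # the partial derivatives commute
```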
In the several variables case, the universal property characterizing [math]\displaystyle{ R[[X_1, \ldots, X_r]] }[/math] becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x1, …, xr are elements of I, then there is a unique map [math]\displaystyle{ \Phi: R[[X_1, \ldots, X_r]] \to S }[/math] with the following properties: Φ is an R-algebra homomorphism, Φ is continuous, and [math]\displaystyle{ \Phi(X_i) = x_i }[/math] for i = 1, …, r.
The several variable case can be further generalised by taking non-commuting variables Xi for i ∈ I, where I is an index set and then a monomial [math]\displaystyle{ X^\alpha }[/math] is any word in the XI; a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials [math]\displaystyle{ X^\alpha }[/math] to a corresponding coefficient [math]\displaystyle{ c_\alpha }[/math], and is denoted [math]\displaystyle{ \textstyle\sum_\alpha c_\alpha X^\alpha }[/math]. The set of all such formal power series is denoted R«XI», and it is given a ring structure by defining addition pointwise
[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_\alpha(c_\alpha+d_\alpha)X^\alpha }[/math]
and multiplication by
[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\beta d_\beta X^\beta\right)=\sum_{\alpha,\beta}c_\alpha d_\beta X^{\alpha\cdot\beta} }[/math]
where · denotes concatenation of words. These formal power series over R form the Magnus ring over R.[3][4]
Let [math]\displaystyle{ \Sigma }[/math] be an alphabet and [math]\displaystyle{ S }[/math] a semiring. The set of formal power series over [math]\displaystyle{ S }[/math] supported on the language [math]\displaystyle{ \Sigma^* }[/math] is denoted by [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math]. It consists of all mappings [math]\displaystyle{ r:\Sigma^*\to S }[/math], where [math]\displaystyle{ \Sigma^* }[/math] is the free monoid generated by the non-empty set [math]\displaystyle{ \Sigma }[/math].
The elements of [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math] can be written as formal sums [math]\displaystyle{ r = \sum_{w\in\Sigma^*} (r,w)\,w, }[/math]
where [math]\displaystyle{ (r,w) }[/math] denotes the value of [math]\displaystyle{ r }[/math] at the word [math]\displaystyle{ w\in\Sigma^* }[/math]. The elements [math]\displaystyle{ (r,w)\in S }[/math] are called the coefficients of [math]\displaystyle{ r }[/math].
For [math]\displaystyle{ r\in S\langle\langle \Sigma^*\rangle\rangle }[/math] the support of [math]\displaystyle{ r }[/math] is the set [math]\displaystyle{ \operatorname{supp}(r)=\{w\in\Sigma^* \mid (r,w)\neq 0\}. }[/math]
A series where every coefficient is either [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ 1 }[/math] is called the characteristic series of its support.
The subset of [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math] consisting of all series with finite support is denoted by [math]\displaystyle{ S\langle \Sigma^*\rangle }[/math]; its elements are called polynomials.
For [math]\displaystyle{ r_1, r_2\in S\langle\langle \Sigma^*\rangle\rangle }[/math] and [math]\displaystyle{ s\in S }[/math], the sum [math]\displaystyle{ r_1+r_2 }[/math] is defined by [math]\displaystyle{ (r_1+r_2,w)=(r_1,w)+(r_2,w). }[/math]
The (Cauchy) product [math]\displaystyle{ r_1\cdot r_2 }[/math] is defined by [math]\displaystyle{ (r_1\cdot r_2,w)=\sum_{w_1w_2=w}(r_1,w_1)(r_2,w_2), }[/math] the sum running over all factorizations of the word [math]\displaystyle{ w }[/math].
The Hadamard product [math]\displaystyle{ r_1\odot r_2 }[/math] is defined by [math]\displaystyle{ (r_1\odot r_2,w)=(r_1,w)(r_2,w). }[/math]
And the products by a scalar [math]\displaystyle{ sr_1 }[/math] and [math]\displaystyle{ r_1s }[/math] by [math]\displaystyle{ (sr_1,w)=s(r_1,w) }[/math] and [math]\displaystyle{ (r_1s,w)=(r_1,w)s, }[/math] respectively.
With these operations [math]\displaystyle{ (S\langle\langle \Sigma^*\rangle\rangle,+,\cdot,0,\varepsilon) }[/math] and [math]\displaystyle{ (S\langle \Sigma^*\rangle, +,\cdot,0,\varepsilon) }[/math] are semirings, where [math]\displaystyle{ \varepsilon }[/math] is the empty word in [math]\displaystyle{ \Sigma^* }[/math].
These formal power series are used to model the behavior of weighted automata, in theoretical computer science, when the coefficients [math]\displaystyle{ (r,w) }[/math] of the series are taken to be the weight of a path with label [math]\displaystyle{ w }[/math] in the automata.[5]
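A minimal illustration of this idea (the automaton and all names below are a hypothetical example, not taken from the cited sources): a two-state weighted automaton over the semiring ℕ whose behaviour is the series r with (r, w) equal to the number of occurrences of the letter a in w.

```python
# A two-state weighted automaton over the semiring (N, +, *): an initial
# (row) vector, one transition matrix per letter, and a final (column) vector.
# The coefficient (r, w) is  init * mu(w_1) * ... * mu(w_n) * final.

def mat_mul(A, B):
    """Plain matrix product over the semiring of natural numbers."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

init = [[1, 0]]             # start in state 0 with weight 1
final = [[0], [1]]          # only state 1 contributes a final weight
mu = {
    'a': [[1, 1], [0, 1]],  # reading 'a' also allows a weight-1 jump 0 -> 1
    'b': [[1, 0], [0, 1]],  # reading 'b' changes nothing
}

def coeff(w):
    """The coefficient (r, w) of the automaton's behaviour."""
    v = init
    for letter in w:
        v = mat_mul(v, mu[letter])
    return mat_mul(v, final)[0][0]

assert coeff('abab') == 2   # the series counts occurrences of 'a'
assert coeff('') == 0
```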
Suppose [math]\displaystyle{ G }[/math] is an ordered abelian group, meaning an abelian group with a total ordering [math]\displaystyle{ \lt }[/math] respecting the group's addition, so that [math]\displaystyle{ a\lt b }[/math] if and only if [math]\displaystyle{ a+c\lt b+c }[/math] for all [math]\displaystyle{ c }[/math]. Let I be a well-ordered subset of [math]\displaystyle{ G }[/math], meaning I contains no infinite descending chain. Consider the set consisting of the formal sums [math]\displaystyle{ \sum_{i \in I} a_i X^i }[/math]
for all such I, with [math]\displaystyle{ a_i }[/math] in a commutative ring [math]\displaystyle{ R }[/math], where we assume that for any index set, if all of the [math]\displaystyle{ a_i }[/math] are zero then the sum is zero. Then [math]\displaystyle{ R((G)) }[/math] is the ring of formal power series on [math]\displaystyle{ G }[/math]; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation [math]\displaystyle{ R^G }[/math] is used to denote [math]\displaystyle{ R((G)) }[/math].[6]
Various properties of [math]\displaystyle{ R }[/math] transfer to [math]\displaystyle{ R((G)) }[/math]. If [math]\displaystyle{ R }[/math] is a field, then so is [math]\displaystyle{ R((G)) }[/math]. If [math]\displaystyle{ R }[/math] is an ordered field, we can order [math]\displaystyle{ R((G)) }[/math] by setting any element to have the same sign as its leading coefficient, defined as the least element of the index set I associated to a non-zero coefficient. Finally if [math]\displaystyle{ G }[/math] is a divisible group and [math]\displaystyle{ R }[/math] is a real closed field, then [math]\displaystyle{ R((G)) }[/math] is a real closed field, and if [math]\displaystyle{ R }[/math] is algebraically closed, then so is [math]\displaystyle{ R((G)) }[/math].
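A small sketch with G = (ℚ, +): finite-support elements of R((G)) can be stored as dicts from rational exponents to coefficients, multiplication adds exponents, and for ordered R the sign of an element is the sign of its leading (least-exponent) coefficient. (General Hahn series allow any well-ordered support; finite support is enough to illustrate the arithmetic.)

```python
from fractions import Fraction as Fr

# Finite-support elements of R((G)) with G = (Q, +) and R = Q:
# a dict mapping (rational) exponents to nonzero coefficients.

def mul(a, b):
    """Multiplication: X^e1 * X^e2 = X^(e1 + e2), extended bilinearly."""
    c = {}
    for e1, u in a.items():
        for e2, v in b.items():
            c[e1 + e2] = c.get(e1 + e2, 0) + u * v
    return {e: v for e, v in c.items() if v != 0}

def sign(a):
    """Sign of an element: the sign of its leading (least-exponent) coefficient."""
    if not a:
        return 0
    lead = a[min(a)]
    return (lead > 0) - (lead < 0)

x_half = {Fr(1, 2): 1}             # X^(1/2)
f = {Fr(-1, 3): 2, Fr(1, 2): -1}   # 2*X^(-1/3) - X^(1/2)

assert mul(x_half, x_half) == {Fr(1): 1}  # X^(1/2) * X^(1/2) = X
assert sign(f) == 1                       # leading term 2*X^(-1/3) is positive
```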
This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality.
Original source: https://en.wikipedia.org/wiki/Formal power series.