Formal power series

In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.).

A formal power series is a special kind of formal series, whose terms are of the form [math]\displaystyle{ a x^n }[/math] where [math]\displaystyle{ x^n }[/math] is the [math]\displaystyle{ n }[/math]th power of a variable [math]\displaystyle{ x }[/math] ([math]\displaystyle{ n }[/math] is a non-negative integer), and [math]\displaystyle{ a }[/math] is called the coefficient. Hence, power series can be viewed as a generalization of polynomials, where the number of terms is allowed to be infinite, with no requirements of convergence. Thus, the series may no longer represent a function of its variable, merely a formal sequence of coefficients, in contrast to a power series, which defines a function by taking numerical values for the variable within a radius of convergence. In a formal power series, the [math]\displaystyle{ x^n }[/math] are used only as position-holders for the coefficients, so that the coefficient of [math]\displaystyle{ x^5 }[/math] is the fifth term in the sequence. In combinatorics, the method of generating functions uses formal power series to represent numerical sequences and multisets, for instance allowing concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved. More generally, formal power series can include series with any finite (or countable) number of variables, and with coefficients in an arbitrary ring.

Rings of formal power series are complete local rings, and this allows using calculus-like methods in the purely algebraic framework of algebraic geometry and commutative algebra. They are analogous in many ways to p-adic integers, which can be defined as formal series of the powers of p.

Introduction

A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series

[math]\displaystyle{ A = 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 + \cdots. }[/math]

If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ... ] as coefficients, even though the corresponding power series diverges for any nonzero value of X.

Arithmetic on formal power series is carried out by simply pretending that the series are polynomials. For example, if

[math]\displaystyle{ B = 2X + 4X^3 + 6X^5 + \cdots, }[/math]

then we add A and B term by term:

[math]\displaystyle{ A + B = 1 - X + 5X^2 - 3X^3 + 9X^4 - 5X^5 + \cdots. }[/math]

We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product):

[math]\displaystyle{ AB = 2X - 6X^2 + 14X^3 - 26X^4 + 44X^5 + \cdots. }[/math]

Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the X^5 term is given by

[math]\displaystyle{ 44X^5 = (1\times 6X^5) + (5X^2 \times 4X^3) + (9X^4 \times 2X). }[/math]

For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis.
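
To make the preceding discussion concrete, here is a minimal Python sketch (an illustration added to this text, not part of the original article) that multiplies two truncated formal power series represented as lists of coefficients; the names cauchy_product and NUM_TERMS are ad hoc choices, not an established interface.

    # A formal power series is represented by its list of coefficients [a_0, a_1, a_2, ...],
    # truncated to NUM_TERMS terms; each product coefficient needs only finitely many of them.
    NUM_TERMS = 6

    def cauchy_product(a, b):
        """Coefficient m of the product is sum_{k=0}^{m} a_k * b_{m-k} (the Cauchy product)."""
        return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(NUM_TERMS)]

    A = [1, -3, 5, -7, 9, -11]      # 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5
    B = [0, 2, 0, 4, 0, 6]          # 2X + 4X^3 + 6X^5
    print(cauchy_product(A, B))     # [0, 2, -6, 14, -26, 44], matching AB = 2X - 6X^2 + 14X^3 - ... above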

Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A^−1. Now we can define division of formal power series by defining B/A to be the product BA^−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula

[math]\displaystyle{ \frac{1}{1 + X} = 1 - X + X^2 - X^3 + X^4 - X^5 + \cdots. }[/math]

An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator [math]\displaystyle{ [X^n] }[/math] applied to a formal power series [math]\displaystyle{ A }[/math] in one variable extracts the coefficient of the [math]\displaystyle{ n }[/math]th power of the variable, so that [math]\displaystyle{ [X^2]A=5 }[/math] and [math]\displaystyle{ [X^5]A=-11 }[/math]. Other examples include

[math]\displaystyle{ \begin{align} \left[X^3\right] (B) &= 4, \\ \left[X^2 \right] (X + 3 X^2 Y^3 + 10 Y^6) &= 3Y^3, \\ \left[X^2Y^3 \right] ( X + 3 X^2 Y^3 + 10 Y^6) &= 3, \\ \left[X^n \right] \left(\frac{1}{1+X} \right) &= (-1)^n, \\ \left[X^n \right] \left(\frac{X}{(1-X)^2} \right) &= n. \end{align} }[/math]

Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below.

The ring of formal power series

If one considers the set of all formal power series in X with coefficients in a commutative ring R, the elements of this set collectively constitute another ring which is written [math]\displaystyle{ R[[X]], }[/math] and called the ring of formal power series in the variable X over R.

Definition of the formal power series ring

One can characterize [math]\displaystyle{ R[[X]] }[/math] abstractly as the completion of the polynomial ring [math]\displaystyle{ R[X] }[/math] equipped with a particular metric. This automatically gives [math]\displaystyle{ R[[X]] }[/math] the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe [math]\displaystyle{ R[[X]] }[/math] more explicitly, and define the ring structure and topological structure separately, as follows.

Ring structure

As a set, [math]\displaystyle{ R[[X]] }[/math] can be constructed as the set [math]\displaystyle{ R^\N }[/math] of all infinite sequences of elements of [math]\displaystyle{ R }[/math], indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index [math]\displaystyle{ n }[/math] is [math]\displaystyle{ a_n }[/math] by [math]\displaystyle{ (a_n) }[/math], one defines addition of two such sequences by

[math]\displaystyle{ (a_n)_{n\in\N} + (b_n)_{n\in\N} = \left( a_n + b_n \right)_{n\in\N} }[/math]

and multiplication by

[math]\displaystyle{ (a_n)_{n\in\N} \times (b_n)_{n\in\N} = \left( \sum_{k=0}^n a_k b_{n-k} \right)_{\!n\in\N}. }[/math]

This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, [math]\displaystyle{ R^\N }[/math] becomes a commutative ring with zero element [math]\displaystyle{ (0,0,0,\ldots) }[/math] and multiplicative identity [math]\displaystyle{ (1,0,0,\ldots) }[/math].

The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds [math]\displaystyle{ R }[/math] into [math]\displaystyle{ R[[X]] }[/math] by sending any (constant) [math]\displaystyle{ a \in R }[/math] to the sequence [math]\displaystyle{ (a,0,0,\ldots) }[/math] and designates the sequence [math]\displaystyle{ (0,1,0,0,\ldots) }[/math] by [math]\displaystyle{ X }[/math]; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as

[math]\displaystyle{ (a_0, a_1, a_2, \ldots, a_n, 0, 0, \ldots) = a_0 + a_1 X + \cdots + a_n X^n = \sum_{i=0}^n a_i X^i; }[/math]

these are precisely the polynomials in [math]\displaystyle{ X }[/math]. Given this, it is quite natural and convenient to designate a general sequence [math]\displaystyle{ (a_n)_{n\in\N} }[/math] by the formal expression [math]\displaystyle{ \textstyle\sum_{i\in\N}a_i X^i }[/math], even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as

[math]\displaystyle{ \left(\sum_{i\in\N} a_i X^i\right)+\left(\sum_{i\in\N} b_i X^i\right) = \sum_{i\in\N}(a_i+b_i) X^i }[/math]

and

[math]\displaystyle{ \left(\sum_{i\in\N} a_i X^i\right) \times \left(\sum_{i\in\N} b_i X^i\right) = \sum_{n\in\N} \left(\sum_{k=0}^n a_k b_{n-k}\right) X^n. }[/math]

which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition.

Topological structure

Having stipulated conventionally that

[math]\displaystyle{ (a_0, a_1, a_2, a_3, \ldots) = \sum_{i=0}^\infty a_i X^i, }[/math]    (1)

one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in [math]\displaystyle{ R^\N }[/math] is defined and a topology on [math]\displaystyle{ R^\N }[/math] is constructed. There are several equivalent ways to define the desired topology.

  • We may give [math]\displaystyle{ R^\N }[/math] the product topology, where each copy of [math]\displaystyle{ R }[/math] is given the discrete topology.
  • We may give [math]\displaystyle{ R^\N }[/math] the I-adic topology, where [math]\displaystyle{ I=(X) }[/math] is the ideal generated by [math]\displaystyle{ X }[/math], which consists of all sequences whose first term [math]\displaystyle{ a_0 }[/math] is zero.
  • The desired topology could also be derived from the following metric. The distance between distinct sequences [math]\displaystyle{ (a_n), (b_n) \in R^{\N}, }[/math] is defined to be [math]\displaystyle{ d((a_n), (b_n)) = 2^{-k}, }[/math] where [math]\displaystyle{ k }[/math] is the smallest natural number such that [math]\displaystyle{ a_k\neq b_k }[/math]; the distance between two equal sequences is of course zero.
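
A small illustration (added here, not in the original text): the metric from the last bullet can be computed directly on truncated coefficient sequences; series_distance is a hypothetical helper name.

    def series_distance(a, b):
        """d((a_n), (b_n)) = 2^(-k), where k is the first index at which the sequences differ.

        a and b are equal-length lists of coefficients; returns 0.0 if they agree
        everywhere within the truncation."""
        for k, (x, y) in enumerate(zip(a, b)):
            if x != y:
                return 2.0 ** (-k)
        return 0.0

    print(series_distance([1, 2, 3], [1, 2, 4]))   # 0.25, since the sequences first differ at index k = 2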

Informally, two sequences [math]\displaystyle{ (a_n) }[/math] and [math]\displaystyle{ (b_n) }[/math] become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of [math]\displaystyle{ X }[/math] the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (1), regardless of the values [math]\displaystyle{ a_n }[/math], since inclusion of the term for [math]\displaystyle{ i=n }[/math] gives the last (and in fact only) change to the coefficient of [math]\displaystyle{ X^n }[/math]. It is also obvious that the limit of the sequence of partial sums is equal to the left hand side.

This topological structure, together with the ring operations described above, forms a topological ring. This is called the ring of formal power series over [math]\displaystyle{ R }[/math] and is denoted by [math]\displaystyle{ R[[X]] }[/math]. The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of [math]\displaystyle{ X }[/math] occurs in only finitely many terms.

The topological structure allows much more flexible usage of infinite summations. For instance the rule for multiplication can be restated simply as

[math]\displaystyle{ \left(\sum_{i\in\N} a_i X^i\right) \times \left(\sum_{i\in\N} b_i X^i\right) = \sum_{i,j\in\N} a_i b_j X^{i+j}, }[/math]

since only finitely many terms on the right affect any fixed [math]\displaystyle{ X^n }[/math]. Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero).

Alternative topologies

The above topology is the finest topology for which

[math]\displaystyle{ \sum_{i=0}^\infty a_i X^i }[/math]

always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring [math]\displaystyle{ R }[/math] already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series.

In the ring of formal power series [math]\displaystyle{ \Z[[X]][[Y]] }[/math], the topology of the above construction only relates to the indeterminate [math]\displaystyle{ Y }[/math], since the topology that was put on [math]\displaystyle{ \Z[[X]] }[/math] has been replaced by the discrete topology when defining the topology of the whole ring. So

[math]\displaystyle{ \sum_{i = 0}^\infty XY^i }[/math]

converges (and its sum can be written as [math]\displaystyle{ \tfrac{X}{1-Y} }[/math]); however

[math]\displaystyle{ \sum_{i = 0}^\infty X^i Y }[/math]

would be considered to be divergent, since every term affects the coefficient of [math]\displaystyle{ Y }[/math]. This asymmetry disappears if the power series ring in [math]\displaystyle{ Y }[/math] is given the product topology where each copy of [math]\displaystyle{ \Z[[X]] }[/math] is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of [math]\displaystyle{ \Z[[X]][[Y]] }[/math] converges if the coefficient of each power of [math]\displaystyle{ Y }[/math] converges to a formal power series in [math]\displaystyle{ X }[/math], a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of [math]\displaystyle{ Y }[/math] converges to [math]\displaystyle{ \tfrac{1}{1-X} }[/math], so the whole summation converges to [math]\displaystyle{ \tfrac{Y}{1-X} }[/math].

This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing [math]\displaystyle{ \Z[[X,Y]] }[/math] and here a sequence converges if and only if the coefficient of every monomial [math]\displaystyle{ X^iY^j }[/math] stabilizes. This topology, which is also the [math]\displaystyle{ I }[/math]-adic topology, where [math]\displaystyle{ I=(X,Y) }[/math] is the ideal generated by [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], still enjoys the property that a summation converges if and only if its terms tend to 0.

The same principle could be used to make other divergent limits converge. For instance in [math]\displaystyle{ \R[[X]] }[/math] the limit

[math]\displaystyle{ \lim_{n\to\infty}\left(1+\frac{X}{n}\right)^{\!n} }[/math]

does not exist, so in particular it does not converge to

[math]\displaystyle{ \exp(X) = \sum_{n\in\N}\frac{X^n}{n!}. }[/math]

This is because for [math]\displaystyle{ i\geq 2 }[/math] the coefficient [math]\displaystyle{ \tbinom{n}{i}/n^i }[/math] of [math]\displaystyle{ X^i }[/math] does not stabilize as [math]\displaystyle{ n\to \infty }[/math]. It does however converge in the usual topology of [math]\displaystyle{ \R }[/math], and in fact to the coefficient [math]\displaystyle{ \tfrac{1}{i!} }[/math] of [math]\displaystyle{ \exp(X) }[/math]. Therefore, if one were to give [math]\displaystyle{ \R[[X]] }[/math] the product topology of [math]\displaystyle{ \R^\N }[/math] where the topology of [math]\displaystyle{ \R }[/math] is the usual topology rather than the discrete one, then the above limit would converge to [math]\displaystyle{ \exp(X) }[/math]. This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0.

Universal property

The ring [math]\displaystyle{ R[[X]] }[/math] may be characterized by the following universal property. If [math]\displaystyle{ S }[/math] is a commutative associative algebra over [math]\displaystyle{ R }[/math], if [math]\displaystyle{ I }[/math] is an ideal of [math]\displaystyle{ S }[/math] such that the [math]\displaystyle{ I }[/math]-adic topology on [math]\displaystyle{ S }[/math] is complete, and if [math]\displaystyle{ x }[/math] is an element of [math]\displaystyle{ I }[/math], then there is a unique [math]\displaystyle{ \Phi: R[[X]]\to S }[/math] with the following properties:

  • [math]\displaystyle{ \Phi }[/math] is an [math]\displaystyle{ R }[/math]-algebra homomorphism
  • [math]\displaystyle{ \Phi }[/math] is continuous
  • [math]\displaystyle{ \Phi(X)=x }[/math].

Operations on formal power series

One can perform algebraic operations on power series to generate new power series.[1][2] Besides the ring structure operations defined above, we have the following.

Power series raised to powers

For any natural number n we have [math]\displaystyle{ \left( \sum_{k=0}^\infty a_k X^k \right)^{\!n} =\, \sum_{m=0}^\infty c_m X^m, }[/math] where [math]\displaystyle{ \begin{align} c_0 &= a_0^n,\\ c_m &= \frac{1}{m a_0} \sum_{k=1}^m (kn - m+k) a_{k} c_{m-k}, \ \ \ m \geq 1. \end{align} }[/math]

(This formula can only be used if m and a0 are invertible in the ring of coefficients.)
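
For illustration, the recurrence above can be transcribed directly into code. The following Python sketch (the name series_power is hypothetical) assumes the coefficients are rational numbers, so that the required division by m·a_0 is possible.

    from fractions import Fraction

    def series_power(a, n, num_terms):
        """First num_terms coefficients of (sum a_k X^k)^n, via c_m = (1/(m a_0)) sum_k (kn - m + k) a_k c_{m-k}.

        Assumes a[0] is nonzero (here: a nonzero rational)."""
        a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms   # pad with zeros
        c = [a[0] ** n]
        for m in range(1, num_terms):
            s = sum((k * n - m + k) * a[k] * c[m - k] for k in range(1, m + 1))
            c.append(s / (m * a[0]))
        return c

    # (1 + X)^3 = 1 + 3X + 3X^2 + X^3
    print([int(c) for c in series_power([1, 1], 3, 6)])   # [1, 3, 3, 1, 0, 0]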

In the case of formal power series with complex coefficients, the complex powers are well defined at least for series f with constant term equal to 1. In this case, [math]\displaystyle{ f^{\alpha} }[/math] can be defined either by composition with the binomial series (1+x)α, or by composition with the exponential and the logarithmic series, [math]\displaystyle{ f^{\alpha} = \exp(\alpha\log(f)), }[/math] or as the solution of the differential equation [math]\displaystyle{ f( f^{\alpha})' = \alpha f^{\alpha} f' }[/math] with constant term 1, the three definitions being equivalent. The rules of calculus [math]\displaystyle{ (f^\alpha)^\beta = f^{\alpha\beta} }[/math] and [math]\displaystyle{ f^\alpha g^\alpha = (fg)^\alpha }[/math] easily follow.

Multiplicative inverse

The series

[math]\displaystyle{ A = \sum_{n=0}^\infty a_n X^n \in R[[X]] }[/math]

is invertible in [math]\displaystyle{ R[[X]] }[/math] if and only if its constant coefficient [math]\displaystyle{ a_0 }[/math] is invertible in [math]\displaystyle{ R }[/math]. This condition is necessary, for the following reason: if we suppose that [math]\displaystyle{ A }[/math] has an inverse [math]\displaystyle{ B = b_0 + b_1 X + \cdots }[/math], then the constant term [math]\displaystyle{ a_0b_0 }[/math] of [math]\displaystyle{ A \cdot B }[/math] must equal 1, the constant term of the identity series, so [math]\displaystyle{ a_0 }[/math] is invertible in [math]\displaystyle{ R }[/math]. This condition is also sufficient; we may compute the coefficients of the inverse series [math]\displaystyle{ B }[/math] via the explicit recursive formula

[math]\displaystyle{ \begin{align} b_0 &= \frac{1}{a_0},\\ b_n &= -\frac{1}{a_0} \sum_{i=1}^n a_i b_{n-i}, \ \ \ n \geq 1. \end{align} }[/math]

An important special case is that the geometric series formula is valid in [math]\displaystyle{ R[[X]] }[/math]:

[math]\displaystyle{ (1 - X)^{-1} = \sum_{n=0}^\infty X^n. }[/math]
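
As an illustrative sketch (assuming rational coefficients, and with the hypothetical name series_inverse), the recursion above can be coded directly; it reproduces the geometric series just stated as well as the series for 1/(1 + X) from the introduction.

    from fractions import Fraction

    def series_inverse(a, num_terms):
        """First num_terms coefficients b_n of the inverse of sum a_n X^n, assuming a[0] is invertible."""
        a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms   # pad with zeros
        b = [1 / a[0]]
        for n in range(1, num_terms):
            b.append(-sum(a[i] * b[n - i] for i in range(1, n + 1)) / a[0])
        return b

    print([int(b) for b in series_inverse([1, -1], 6)])   # [1, 1, 1, 1, 1, 1]: the geometric series (1 - X)^{-1}
    print([int(b) for b in series_inverse([1, 1], 6)])    # [1, -1, 1, -1, 1, -1]: the series 1/(1 + X)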

If [math]\displaystyle{ R=K }[/math] is a field, then a series is invertible if and only if the constant term is non-zero, i.e. if and only if the series is not divisible by [math]\displaystyle{ X }[/math]. This means that [math]\displaystyle{ K[[X]] }[/math] is a discrete valuation ring with uniformizing parameter [math]\displaystyle{ X }[/math].

Division

The computation of a quotient [math]\displaystyle{ f/g=h }[/math]

[math]\displaystyle{ \frac{\sum_{n=0}^\infty b_n X^n }{\sum_{n=0}^\infty a_n X^n } =\sum_{n=0}^\infty c_n X^n, }[/math]

assuming the denominator is invertible (that is, [math]\displaystyle{ a_0 }[/math] is invertible in the ring of scalars), can be performed as a product of [math]\displaystyle{ f }[/math] and the inverse of [math]\displaystyle{ g }[/math], or by directly equating the coefficients in [math]\displaystyle{ f=gh }[/math]:

[math]\displaystyle{ c_n = \frac{1}{a_0}\left(b_n - \sum_{k=1}^n a_k c_{n-k}\right). }[/math]
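
The same recurrence in code, as an illustrative sketch with rational coefficients (the name series_divide is hypothetical):

    from fractions import Fraction

    def series_divide(b, a, num_terms):
        """First num_terms coefficients c_n of (sum b_n X^n) / (sum a_n X^n), assuming a[0] is invertible."""
        a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms
        b = [Fraction(x) for x in b] + [Fraction(0)] * num_terms
        c = []
        for n in range(num_terms):
            c.append((b[n] - sum(a[k] * c[n - k] for k in range(1, n + 1))) / a[0])
        return c

    # X / (1 - X)^2 = X + 2X^2 + 3X^3 + ..., so the coefficient of X^n is n
    print([int(c) for c in series_divide([0, 1], [1, -2, 1], 6)])   # [0, 1, 2, 3, 4, 5]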

Extracting coefficients

The coefficient extraction operator applied to a formal power series

[math]\displaystyle{ f(X) = \sum_{n=0}^\infty a_n X^n }[/math]

in X is written

[math]\displaystyle{ \left[ X^m \right] f(X) }[/math]

and extracts the coefficient of X^m, so that

[math]\displaystyle{ \left[ X^m \right] f(X) = \left[ X^m \right] \sum_{n=0}^\infty a_n X^n = a_m. }[/math]

Composition

Given formal power series

[math]\displaystyle{ f(X) = \sum_{n=1}^\infty a_n X^n = a_1 X + a_2 X^2 + \cdots }[/math]
[math]\displaystyle{ g(X) = \sum_{n=0}^\infty b_n X^n = b_0 + b_1 X + b_2 X^2 + \cdots, }[/math]

one may form the composition

[math]\displaystyle{ g(f(X)) = \sum_{n=0}^\infty b_n (f(X))^n = \sum_{n=0}^\infty c_n X^n, }[/math]

where the coefficients cn are determined by "expanding out" the powers of f(X):

[math]\displaystyle{ c_n:=\sum_{k\in\N, |j|=n} b_k a_{j_1} a_{j_2} \cdots a_{j_k}. }[/math]

Here the sum is extended over all (k, j) with [math]\displaystyle{ k\in\N }[/math] and [math]\displaystyle{ j\in\N_+^k }[/math] with [math]\displaystyle{ |j|:=j_1+\cdots+j_k=n. }[/math]

A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0.

Composition is only valid when [math]\displaystyle{ f(X) }[/math] has no constant term, so that each [math]\displaystyle{ c_n }[/math] depends on only a finite number of coefficients of [math]\displaystyle{ f(X) }[/math] and [math]\displaystyle{ g(X) }[/math]. In other words, the series for [math]\displaystyle{ g(f(X)) }[/math] converges in the topology of [math]\displaystyle{ R[[X]] }[/math].
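
On truncations, the composition can be computed by a Horner-style evaluation, which avoids enumerating the tuples j explicitly. The following Python sketch (illustrative; rational coefficients assumed; mul and compose are hypothetical names) enforces the requirement that f have zero constant term.

    from fractions import Fraction

    def mul(a, b, num_terms):
        """Truncated Cauchy product."""
        return [sum(a[k] * b[m - k] for k in range(m + 1) if k < len(a) and m - k < len(b))
                for m in range(num_terms)]

    def compose(g, f, num_terms):
        """First num_terms coefficients of g(f(X)); requires f[0] == 0 so that the result is well defined."""
        assert f[0] == 0
        result = [Fraction(0)] * num_terms
        # Horner's scheme: g(f) = b_0 + f*(b_1 + f*(b_2 + ...))
        for b in reversed([Fraction(x) for x in g[:num_terms]]):
            result = mul(result, f, num_terms)
            result[0] += b
        return result

    # g = 1/(1 - X) = 1 + X + X^2 + ... and f = 2X, so g(f(X)) = 1/(1 - 2X) = sum 2^n X^n
    g = [1, 1, 1, 1, 1, 1]
    f = [0, 2, 0, 0, 0, 0]
    print([int(c) for c in compose(g, f, 6)])   # [1, 2, 4, 8, 16, 32]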

Example

Assume that the ring [math]\displaystyle{ R }[/math] has characteristic 0 and the nonzero integers are invertible in [math]\displaystyle{ R }[/math]. If we denote by [math]\displaystyle{ \exp(X) }[/math] the formal power series

[math]\displaystyle{ \exp(X) = 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \frac{X^4}{4!} + \cdots, }[/math]

then the expression

[math]\displaystyle{ \exp(\exp(X) - 1) = 1 + X + X^2 + \frac{5X^3}6 + \frac{5X^4}8 + \cdots }[/math]

makes perfect sense as a formal power series. However, the statement

[math]\displaystyle{ \exp(\exp(X)) \ \stackrel?=\ e \exp(\exp(X) - 1) \ =\ e + eX + eX^2 + \frac{5eX^3}{6} + \cdots }[/math]

is not a valid application of the composition operation for formal power series. Rather, it is confusing the notions of convergence in [math]\displaystyle{ R[[X]] }[/math] and convergence in [math]\displaystyle{ R }[/math]; indeed, the ring [math]\displaystyle{ R }[/math] may not even contain any number [math]\displaystyle{ e }[/math] with the appropriate properties.

Composition inverse

Whenever a formal series

[math]\displaystyle{ f(X)=\sum_k f_k X^k \in R[[X]] }[/math]

has f_0 = 0 and f_1 an invertible element of R, there exists a series

[math]\displaystyle{ g(X)=\sum_k g_k X^k }[/math]

that is the composition inverse of [math]\displaystyle{ f }[/math], meaning that composing [math]\displaystyle{ f }[/math] with [math]\displaystyle{ g }[/math] gives the series representing the identity function [math]\displaystyle{ X = 0 + 1X + 0X^2+ 0X^3+\cdots }[/math]. The coefficients of [math]\displaystyle{ g }[/math] may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is, 1 in degree 1 and 0 in every other degree). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g.
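
A sketch of this degree-by-degree computation (an illustration with rational coefficients; the helper names are hypothetical): g_n is chosen at each step so that the coefficient of X^n in f(g(X)) matches the corresponding coefficient of X.

    from fractions import Fraction

    def mul(a, b, num_terms):
        return [sum(a[k] * b[m - k] for k in range(m + 1) if k < len(a) and m - k < len(b))
                for m in range(num_terms)]

    def compose(outer, inner, num_terms):
        """outer(inner(X)) truncated; requires inner[0] == 0."""
        result = [Fraction(0)] * num_terms
        for c in reversed([Fraction(x) for x in outer[:num_terms]]):
            result = mul(result, inner, num_terms)
            result[0] += c
        return result

    def composition_inverse(f, num_terms):
        """Coefficients of g with f(g(X)) = X, assuming f[0] == 0 and f[1] invertible."""
        f = [Fraction(x) for x in f] + [Fraction(0)] * num_terms
        g = [Fraction(0), 1 / f[1]]
        for n in range(2, num_terms):
            g.append(Fraction(0))
            # choose g_n so that the coefficient of X^n in f(g(X)) vanishes
            g[n] = -compose(f, g, n + 1)[n] / f[1]
        return g

    # f = X - X^2 has composition inverse g = X + X^2 + 2X^3 + 5X^4 + 14X^5 + ... (Catalan numbers)
    print([int(c) for c in composition_inverse([0, 1, -1], 6)])   # [0, 1, 1, 2, 5, 14]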

Formal differentiation

Given a formal power series

[math]\displaystyle{ f = \sum_{n\geq 0} a_n X^n \in R[[X]], }[/math]

we define its formal derivative, denoted Df or f ′, by

[math]\displaystyle{ Df = f' = \sum_{n \geq 1} a_n n X^{n-1}. }[/math]

The symbol D is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial.

This operation is R-linear:

[math]\displaystyle{ D(af + bg) = a \cdot Df + b \cdot Dg }[/math]

for any a, b in R and any f, g in [math]\displaystyle{ R[[X]]. }[/math] Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid:

[math]\displaystyle{ D(fg) \ =\ f \cdot (Dg) + (Df) \cdot g, }[/math]

and the chain rule works as well:

[math]\displaystyle{ D(f\circ g ) = ( Df\circ g ) \cdot Dg, }[/math]

whenever the appropriate compositions of series are defined (see above under composition of series).

Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that

[math]\displaystyle{ (D^k f)(0) = k! a_k, }[/math]

where D^k denotes the kth formal derivative (that is, the result of formally differentiating k times).
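
A small illustrative sketch of the formal derivative on truncated coefficient lists, checking the relation (D^k f)(0) = k! a_k on an example:

    from math import factorial

    def formal_derivative(a):
        """Coefficients of Df for f = sum a_n X^n."""
        return [n * a[n] for n in range(1, len(a))]

    f = [5, 4, 3, 2, 1]              # 5 + 4X + 3X^2 + 2X^3 + X^4
    g = list(f)
    for k in range(len(f)):
        assert g[0] == factorial(k) * f[k]   # the constant term of D^k f equals k! a_k
        g = formal_derivative(g)
    print("(D^k f)(0) == k! a_k checked for k = 0, ..., 4")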

Formal antidifferentiation

If [math]\displaystyle{ R }[/math] is a ring with characteristic zero and the nonzero integers are invertible in [math]\displaystyle{ R }[/math], then given a formal power series

[math]\displaystyle{ f = \sum_{n\geq 0} a_n X^n \in R[[X]], }[/math]

we define its formal antiderivative or formal indefinite integral by

[math]\displaystyle{ D^{-1} f = \int f\ dX = C + \sum_{n \geq 0} a_n \frac{X^{n+1}}{n+1} }[/math]

for any constant [math]\displaystyle{ C \in R }[/math].

This operation is R-linear:

[math]\displaystyle{ D^{-1}(af + bg) = a \cdot D^{-1}f + b \cdot D^{-1}g }[/math]

for any a, b in R and any f, g in [math]\displaystyle{ R[[X]]. }[/math] Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. For example, the formal antiderivative is a right inverse of the formal derivative:

[math]\displaystyle{ D(D^{-1}(f)) = f }[/math]

for any [math]\displaystyle{ f \in R[[X]] }[/math].
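
A minimal sketch (illustrative, rational coefficients, constant C = 0) of the formal antiderivative, checking that D(D^{-1}f) = f on an example:

    from fractions import Fraction

    def formal_derivative(a):
        return [n * a[n] for n in range(1, len(a))]

    def formal_antiderivative(a, constant=Fraction(0)):
        """Coefficients of C + sum a_n X^(n+1)/(n+1)."""
        return [constant] + [a[n] / (n + 1) for n in range(len(a))]

    f = [Fraction(x) for x in (1, 2, 3, 4)]   # 1 + 2X + 3X^2 + 4X^3
    assert formal_derivative(formal_antiderivative(f)) == f
    print("D(D^{-1} f) == f checked")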

Properties

Algebraic properties of the formal power series ring

[math]\displaystyle{ R[[X]] }[/math] is an associative algebra over [math]\displaystyle{ R }[/math] which contains the ring [math]\displaystyle{ R[X] }[/math] of polynomials over [math]\displaystyle{ R }[/math]; the polynomials correspond to the sequences which end in zeros.

The Jacobson radical of [math]\displaystyle{ R[[X]] }[/math] is the ideal generated by [math]\displaystyle{ X }[/math] and the Jacobson radical of [math]\displaystyle{ R }[/math]; this is implied by the element invertibility criterion discussed above.

The maximal ideals of [math]\displaystyle{ R[[X]] }[/math] all arise from those in [math]\displaystyle{ R }[/math] in the following manner: an ideal [math]\displaystyle{ M }[/math] of [math]\displaystyle{ R[[X]] }[/math] is maximal if and only if [math]\displaystyle{ M\cap R }[/math] is a maximal ideal of [math]\displaystyle{ R }[/math] and [math]\displaystyle{ M }[/math] is generated as an ideal by [math]\displaystyle{ X }[/math] and [math]\displaystyle{ M\cap R }[/math].

Several algebraic properties of [math]\displaystyle{ R }[/math] are inherited by [math]\displaystyle{ R[[X]] }[/math]:

  • if [math]\displaystyle{ R }[/math] is a local ring, then so is [math]\displaystyle{ R[[X]] }[/math] (with the set of non-units forming the unique maximal ideal),
  • if [math]\displaystyle{ R }[/math] is Noetherian, then so is [math]\displaystyle{ R[[X]] }[/math] (a version of the Hilbert basis theorem),
  • if [math]\displaystyle{ R }[/math] is an integral domain, then so is [math]\displaystyle{ R[[X]] }[/math], and
  • if [math]\displaystyle{ K }[/math] is a field, then [math]\displaystyle{ K[[X]] }[/math] is a discrete valuation ring.

Topological properties of the formal power series ring

The metric space [math]\displaystyle{ (R[[X]], d) }[/math] is complete.

The ring [math]\displaystyle{ R[[X]] }[/math] is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on [math]\displaystyle{ R[[X]] }[/math] as a product topology.

Weierstrass preparation

The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem.

Applications

Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions.

One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of [math]\displaystyle{ \Q[[X]] }[/math]:

[math]\displaystyle{ \sin(X) := \sum_{n \ge 0} \frac{(-1)^n} {(2n+1)!} X^{2n+1} }[/math]
[math]\displaystyle{ \cos(X) := \sum_{n \ge 0} \frac{(-1)^n} {(2n)!} X^{2n} }[/math]

Then one can show that

[math]\displaystyle{ \sin^2(X) + \cos^2(X) = 1, }[/math]
[math]\displaystyle{ \frac{\partial}{\partial X} \sin(X) = \cos(X), }[/math]
[math]\displaystyle{ \sin (X+Y) = \sin(X) \cos(Y) + \cos(X) \sin(Y). }[/math]

The last identity is valid in the ring [math]\displaystyle{ \Q[[X, Y]]. }[/math]
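
The first of these identities can also be checked mechanically on truncations; the following illustrative sketch verifies sin^2(X) + cos^2(X) = 1 through the X^9 coefficient using exact rational arithmetic.

    from fractions import Fraction
    from math import factorial

    N = 10   # number of coefficients kept

    def mul(a, b):
        return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(N)]

    sin_series = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 1 else Fraction(0) for n in range(N)]
    cos_series = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0 else Fraction(0) for n in range(N)]

    lhs = [s + c for s, c in zip(mul(sin_series, sin_series), mul(cos_series, cos_series))]
    assert lhs == [Fraction(1)] + [Fraction(0)] * (N - 1)
    print("sin^2(X) + cos^2(X) == 1 verified through X^9")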

For K a field, the ring [math]\displaystyle{ K[[X_1, \ldots, X_r]] }[/math] is often used as the "standard, most general" complete local ring over K in algebra.

Interpreting formal power series as functions

In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let

[math]\displaystyle{ f = \sum a_n X^n \in R[[X]], }[/math]

and suppose [math]\displaystyle{ S }[/math] is a commutative associative algebra over [math]\displaystyle{ R }[/math], [math]\displaystyle{ I }[/math] is an ideal in [math]\displaystyle{ S }[/math] such that the I-adic topology on [math]\displaystyle{ S }[/math] is complete, and [math]\displaystyle{ x }[/math] is an element of [math]\displaystyle{ I }[/math]. Define:

[math]\displaystyle{ f(x) = \sum_{n\ge 0} a_n x^n. }[/math]

This series is guaranteed to converge in [math]\displaystyle{ S }[/math] given the above assumptions on [math]\displaystyle{ x }[/math]. Furthermore, we have

[math]\displaystyle{ (f+g)(x) = f(x) + g(x) }[/math]

and

[math]\displaystyle{ (fg)(x) = f(x) g(x). }[/math]

Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved.

Since the topology on [math]\displaystyle{ R[[X]] }[/math] is the [math]\displaystyle{ (X) }[/math]-adic topology and [math]\displaystyle{ R[[X]] }[/math] is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal [math]\displaystyle{ (X) }[/math]): [math]\displaystyle{ f(0) }[/math], [math]\displaystyle{ f(X^2-X) }[/math] and [math]\displaystyle{ f((1-X)^{-1}-1) }[/math] are all well defined for any formal power series [math]\displaystyle{ f \in R[[X]]. }[/math]

With this formalism, we can give an explicit formula for the multiplicative inverse of a power series [math]\displaystyle{ f }[/math] whose constant coefficient [math]\displaystyle{ a=f(0) }[/math] is invertible in [math]\displaystyle{ R }[/math]:

[math]\displaystyle{ f^{-1} = \sum_{n \ge 0} a^{-n-1} (a-f)^n. }[/math]
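
An illustrative sketch verifying this formula on a truncation: since a − f has zero constant term, (a − f)^n has order at least n, so only the terms with n below the truncation length affect the computed coefficients.

    from fractions import Fraction

    N = 6

    def mul(a, b):
        return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(N)]

    f = [Fraction(x) for x in (2, 1, 3, 0, 5, 7)]     # an arbitrary series with invertible constant term a = 2
    a = f[0]
    a_minus_f = [a - f[0]] + [-c for c in f[1:]]      # a - f, which has zero constant term

    inverse = [Fraction(0)] * N
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)   # (a - f)^0
    for n in range(N):                                # terms with n >= N cannot affect X^0, ..., X^(N-1)
        inverse = [u + a ** (-n - 1) * v for u, v in zip(inverse, power)]
        power = mul(power, a_minus_f)

    assert mul(f, inverse) == [Fraction(1)] + [Fraction(0)] * (N - 1)
    print("f * f^{-1} == 1 verified up to degree", N - 1)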

If the formal power series [math]\displaystyle{ g }[/math] with [math]\displaystyle{ g(0)=0 }[/math] is given implicitly by the equation

[math]\displaystyle{ f(g) =X }[/math]

where [math]\displaystyle{ f }[/math] is a known power series with [math]\displaystyle{ f(0)=0 }[/math], then the coefficients of [math]\displaystyle{ g }[/math] can be explicitly computed using the Lagrange inversion formula.

Generalizations

Formal Laurent series

The formal Laurent series over a ring [math]\displaystyle{ R }[/math] are defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as

[math]\displaystyle{ f = \sum_{n = N}^\infty a_n X^n }[/math]

for some integer [math]\displaystyle{ N }[/math], so that there are only finitely many negative [math]\displaystyle{ n }[/math] with [math]\displaystyle{ a_n \neq 0 }[/math]. (This is different from the classical Laurent series of complex analysis.) For a non-zero formal Laurent series, the minimal integer [math]\displaystyle{ n }[/math] such that [math]\displaystyle{ a_n\neq 0 }[/math] is called the order of [math]\displaystyle{ f }[/math] and is denoted [math]\displaystyle{ \operatorname{ord}(f). }[/math] (The order of the zero series is [math]\displaystyle{ +\infty }[/math].)

Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of [math]\displaystyle{ X^k }[/math] of two series with respective sequences of coefficients [math]\displaystyle{ \{a_n\} }[/math] and [math]\displaystyle{ \{b_n\} }[/math] is [math]\displaystyle{ \sum_{i\in\Z}a_ib_{k-i}. }[/math] This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices.

The formal Laurent series form the ring of formal Laurent series over [math]\displaystyle{ R }[/math], denoted by [math]\displaystyle{ R((X)) }[/math].[note 1] It is equal to the localization of the ring [math]\displaystyle{ R[[X]] }[/math] of formal power series with respect to the set of positive powers of [math]\displaystyle{ X }[/math]. If [math]\displaystyle{ R=K }[/math] is a field, then [math]\displaystyle{ K((X)) }[/math] is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain [math]\displaystyle{ K[[X]] }[/math].

As with [math]\displaystyle{ R[[X]] }[/math], the ring [math]\displaystyle{ R((X)) }[/math] of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric [math]\displaystyle{ d(f,g)=2^{-\operatorname{ord}(f-g)}. }[/math]

One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series [math]\displaystyle{ f }[/math] above is [math]\displaystyle{ f' = Df = \sum_{n\in\Z} na_n X^{n-1}, }[/math] which is again a formal Laurent series. If [math]\displaystyle{ f }[/math] is a non-constant formal Laurent series with coefficients in a field of characteristic 0, then one has [math]\displaystyle{ \operatorname{ord}(f')= \operatorname{ord}(f)-1. }[/math] However, in general this is not the case, since the factor [math]\displaystyle{ n }[/math] for the lowest order term could be equal to 0 in [math]\displaystyle{ R }[/math].

Formal residue

Assume that [math]\displaystyle{ K }[/math] is a field of characteristic 0. Then the map

[math]\displaystyle{ D\colon K((X))\to K((X)) }[/math]

is a [math]\displaystyle{ K }[/math]-derivation that satisfies

[math]\displaystyle{ \ker D=K }[/math]
[math]\displaystyle{ \operatorname{im} D= \left \{f\in K((X)) : [X^{-1}]f=0 \right \}. }[/math]

The latter shows that the coefficient of [math]\displaystyle{ X^{-1} }[/math] in [math]\displaystyle{ f }[/math] is of particular interest; it is called formal residue of [math]\displaystyle{ f }[/math] and denoted [math]\displaystyle{ \operatorname{Res}(f) }[/math]. The map

[math]\displaystyle{ \operatorname{Res} : K((X))\to K }[/math]

is [math]\displaystyle{ K }[/math]-linear, and by the above observation one has an exact sequence

[math]\displaystyle{ 0 \to K \to K((X)) \overset{D}{\longrightarrow} K((X)) \;\overset{\operatorname{Res}}{\longrightarrow}\; K \to 0. }[/math]

Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any [math]\displaystyle{ f, g\in K((X)) }[/math]

  i. [math]\displaystyle{ \operatorname{Res}(f')=0; }[/math]
  ii. [math]\displaystyle{ \operatorname{Res}(fg')=-\operatorname{Res}(f'g); }[/math]
  iii. [math]\displaystyle{ \operatorname{Res}(f'/f)=\operatorname{ord}(f),\qquad \forall f\neq 0; }[/math]
  iv. [math]\displaystyle{ \operatorname{Res}\left(( g\circ f) f'\right) = \operatorname{ord}(f)\operatorname{Res}(g), }[/math] if [math]\displaystyle{ \operatorname{ord}(f)\gt 0; }[/math]
  v. [math]\displaystyle{ [X^n]f(X)=\operatorname{Res}\left(X^{-n-1}f(X)\right). }[/math]

Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to [math]\displaystyle{ (fg)'=f'g+fg' }[/math]. Property (iii): any [math]\displaystyle{ f }[/math] can be written in the form [math]\displaystyle{ f=X^mg }[/math], with [math]\displaystyle{ m=\operatorname{ord}(f) }[/math] and [math]\displaystyle{ \operatorname{ord}(g)=0 }[/math]: then [math]\displaystyle{ f'/f = mX^{-1}+g'/g. }[/math] Since [math]\displaystyle{ \operatorname{ord}(g)=0 }[/math], the element [math]\displaystyle{ g }[/math] is invertible in [math]\displaystyle{ K[[X]] }[/math], so that [math]\displaystyle{ g'/g \in K[[X]]\subset \operatorname{im}(D) = \ker(\operatorname{Res}), }[/math] whence [math]\displaystyle{ \operatorname{Res}(f'/f)=m. }[/math] Property (iv): Since [math]\displaystyle{ \operatorname{im}(D) = \ker(\operatorname{Res}), }[/math] we can write [math]\displaystyle{ g=g_{-1}X^{-1}+G', }[/math] with [math]\displaystyle{ G \in K((X)) }[/math]. Consequently, [math]\displaystyle{ (g\circ f)f'= g_{-1}f^{-1}f'+(G'\circ f)f' = g_{-1}f'/f + (G \circ f)' }[/math] and (iv) follows from (i) and (iii). Property (v) is clear from the definition.

The Lagrange inversion formula

Main page: Lagrange inversion theorem

As mentioned above, any formal series [math]\displaystyle{ f \in K[[X]] }[/math] with f_0 = 0 and f_1 ≠ 0 has a composition inverse [math]\displaystyle{ g \in K[[X]]. }[/math] The following relation between the coefficients of g^n and f^k holds ("Lagrange inversion formula"):

[math]\displaystyle{ k[X^k] g^n=n[X^{-n}]f^{-k}. }[/math]

In particular, for n = 1 and all k ≥ 1,

[math]\displaystyle{ [X^k] g=\frac{1}{k} \operatorname{Res}\left( f^{-k}\right). }[/math]

Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting it here. Noting that [math]\displaystyle{ \operatorname{ord}(f) =1 }[/math], we can apply the rules of calculus above, crucially Rule (iv) with the substitution [math]\displaystyle{ X \rightsquigarrow f(X) }[/math], to get:

[math]\displaystyle{ \begin{align} k[X^k] g^n & \ \stackrel{\mathrm{(v)}}=\ k\operatorname{Res}\left( g^n X^{-k-1} \right) \ \stackrel{\mathrm{(iv)}}=\ k\operatorname{Res}\left(X^n f^{-k-1}f'\right) \ \stackrel{\mathrm{chain}}=\ -\operatorname{Res}\left(X^n (f^{-k})'\right) \\ & \ \stackrel{\mathrm{(ii)}}=\ \operatorname{Res}\left(\left(X^n\right)' f^{-k}\right) \ \stackrel{\mathrm{chain}}=\ n\operatorname{Res}\left(X^{n-1}f^{-k}\right) \ \stackrel{\mathrm{(v)}}=\ n[X^{-n}]f^{-k}. \end{align} }[/math]
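
As a concrete sanity check (an illustrative addition) of the special case above, take f = X − X^2, whose composition inverse g has the Catalan numbers as coefficients (see the composition-inverse sketch earlier); since f^{−k} = X^{−k}(1 − X)^{−k}, its formal residue is the coefficient of X^{k−1} in (1 − X)^{−k}.

    from fractions import Fraction

    N = 6

    def mul(a, b):
        return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(N)]

    def inverse(a):
        """Multiplicative inverse of a truncated series with invertible constant term."""
        b = [1 / Fraction(a[0])]
        for n in range(1, N):
            b.append(-sum(Fraction(a[i]) * b[n - i] for i in range(1, n + 1)) / Fraction(a[0]))
        return b

    # f = X - X^2 = X(1 - X); its composition inverse is g = X + X^2 + 2X^3 + 5X^4 + 14X^5 + ...
    g = [0, 1, 1, 2, 5, 14]

    geom = inverse([1, -1, 0, 0, 0, 0])        # (1 - X)^{-1} = 1 + X + X^2 + ...
    for k in range(1, N):
        u = [Fraction(1)] + [Fraction(0)] * (N - 1)
        for _ in range(k):
            u = mul(u, geom)                   # u = (1 - X)^{-k}, truncated
        # Res(f^{-k}) = [X^{-1}] X^{-k}(1 - X)^{-k} = [X^{k-1}] (1 - X)^{-k}
        assert g[k] == u[k - 1] / k
    print("[X^k] g == (1/k) Res(f^{-k}) checked for k = 1, ..., 5")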

Generalizations. One may observe that the above computation can be repeated plainly in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in the [math]\displaystyle{ \Complex((X)) }[/math]-modules [math]\displaystyle{ X^{\alpha}\Complex((X)), }[/math] where α is a complex exponent. As a consequence, if f and g are as above, with [math]\displaystyle{ f_1=g_1=1 }[/math], we can relate the complex powers of f / X and g / X: precisely, if α and β are non-zero complex numbers with negative integer sum, [math]\displaystyle{ m=-\alpha-\beta\in\N, }[/math] then

[math]\displaystyle{ \frac{1}{\alpha}[X^m]\left( \frac{f}{X} \right)^\alpha=-\frac{1}{\beta}[X^m]\left( \frac{g}{X} \right)^\beta. }[/math]

For instance, this way one finds the power series for complex powers of the Lambert function.

Power series in several variables

Formal power series in any number of indeterminates (even infinitely many) can be defined. If I is an index set and X_I is the set of indeterminates X_i for i ∈ I, then a monomial X^α is any finite product of elements of X_I (repetitions allowed); a formal power series in X_I with coefficients in a ring R is determined by any mapping from the set of monomials X^α to a corresponding coefficient c_α, and is denoted [math]\displaystyle{ \sum_\alpha c_\alpha X^\alpha }[/math]. The set of all such formal power series is denoted [math]\displaystyle{ R[[X_I]], }[/math] and it is given a ring structure by defining

[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha \right)= \sum_\alpha (c_\alpha+d_\alpha) X^\alpha }[/math]

and

[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\beta d_\beta X^\beta\right)=\sum_{\alpha,\beta} c_\alpha d_\beta X^{\alpha+\beta} }[/math]

Topology

The topology on [math]\displaystyle{ R[[X_I]] }[/math] is such that a sequence of its elements converges only if for each monomial X^α the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of [math]\displaystyle{ R[[X_I]] }[/math] generated by all the indeterminates in X_I. This does not hold if I is infinite. For example, if [math]\displaystyle{ I=\N, }[/math] then the sequence [math]\displaystyle{ (f_n)_{n\in \N} }[/math] with [math]\displaystyle{ f_n = X_n + X_{n+1} + X_{n+2} + \cdots }[/math] does not converge with respect to any J-adic topology on [math]\displaystyle{ R[[X_I]] }[/math], but clearly for each monomial the corresponding coefficient stabilizes.

As remarked above, the topology on a repeated formal power series ring like [math]\displaystyle{ R[[X]][[Y]] }[/math] is usually chosen in such a way that it becomes isomorphic as a topological ring to [math]\displaystyle{ R[[X,Y]]. }[/math]

Operations

All of the operations defined for series in one variable may be extended to the several variables case.

  • A series is invertible if and only if its constant term is invertible in R.
  • The composition f(g(X)) of two series f and g is defined if f is a series in a single indeterminate, and the constant term of g is zero. For a series f in several indeterminates a form of "composition" can similarly be defined, with as many separate series in the place of g as there are indeterminates.

In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other.

Universal property

In the several variables case, the universal property characterizing [math]\displaystyle{ R[[X_1, \ldots, X_r]] }[/math] becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x_1, …, x_r are elements of I, then there is a unique map [math]\displaystyle{ \Phi: R[[X_1, \ldots, X_r]] \to S }[/math] with the following properties:

  • Φ is an R-algebra homomorphism
  • Φ is continuous
  • Φ(X_i) = x_i for i = 1, …, r.

Non-commuting variables

The several variable case can be further generalised by taking non-commuting variables X_i for i ∈ I, where I is an index set, and then a monomial X^α is any word in the X_I; a formal power series in X_I with coefficients in a ring R is determined by any mapping from the set of monomials X^α to a corresponding coefficient c_α, and is denoted [math]\displaystyle{ \textstyle\sum_\alpha c_\alpha X^\alpha }[/math]. The set of all such formal power series is denoted R«X_I», and it is given a ring structure by defining addition pointwise

[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_\alpha(c_\alpha+d_\alpha)X^\alpha }[/math]

and multiplication by

[math]\displaystyle{ \left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_{\alpha,\beta} c_\alpha d_\beta X^{\alpha} \cdot X^{\beta} }[/math]

where · denotes concatenation of words. These formal power series over R form the Magnus ring over R.[3][4]

On a semiring

Given an alphabet [math]\displaystyle{ \Sigma }[/math] and a semiring [math]\displaystyle{ S }[/math], the set of formal power series over [math]\displaystyle{ S }[/math] supported on the language [math]\displaystyle{ \Sigma^* }[/math] is denoted by [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math]. It consists of all mappings [math]\displaystyle{ r:\Sigma^*\to S }[/math], where [math]\displaystyle{ \Sigma^* }[/math] is the free monoid generated by the non-empty set [math]\displaystyle{ \Sigma }[/math].

The elements of [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math] can be written as formal sums

[math]\displaystyle{ r = \sum_{w \in \Sigma^*} (r,w)w, }[/math]

where [math]\displaystyle{ (r,w) }[/math] denotes the value of [math]\displaystyle{ r }[/math] at the word [math]\displaystyle{ w\in\Sigma^* }[/math]. The elements [math]\displaystyle{ (r,w)\in S }[/math] are called the coefficients of [math]\displaystyle{ r }[/math].

For [math]\displaystyle{ r\in S\langle\langle \Sigma^*\rangle\rangle }[/math] the support of [math]\displaystyle{ r }[/math] is the set

[math]\displaystyle{ \operatorname{supp}(r)=\{w\in\Sigma^*|\ (r,w)\neq 0\} }[/math]

A series where every coefficient is either [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ 1 }[/math] is called the characteristic series of its support.

The subset of [math]\displaystyle{ S\langle\langle \Sigma^*\rangle\rangle }[/math] consisting of all series with a finite support is denoted by [math]\displaystyle{ S\langle \Sigma^*\rangle }[/math] and called polynomials.

For [math]\displaystyle{ r_1, r_2\in S\langle\langle \Sigma^*\rangle\rangle }[/math] and [math]\displaystyle{ s\in S }[/math], the sum [math]\displaystyle{ r_1+r_2 }[/math] is defined by

[math]\displaystyle{ (r_1+r_2,w)=(r_1,w)+(r_2,w) }[/math]

The (Cauchy) product [math]\displaystyle{ r_1\cdot r_2 }[/math] is defined by

[math]\displaystyle{ (r_1\cdot r_2,w) = \sum_{w_1w_2=w}(r_1,w_1)(r_2,w_2) }[/math]

The Hadamard product [math]\displaystyle{ r_1\odot r_2 }[/math] is defined by

[math]\displaystyle{ (r_1\odot r_2,w)=(r_1,w)(r_2,w) }[/math]

And the products by a scalar [math]\displaystyle{ sr_1 }[/math] and [math]\displaystyle{ r_1s }[/math] by

[math]\displaystyle{ (sr_1,w)=s(r_1,w) }[/math] and [math]\displaystyle{ (r_1s,w)=(r_1,w)s }[/math], respectively.

With these operations [math]\displaystyle{ (S\langle\langle \Sigma^*\rangle\rangle,+,\cdot,0,\varepsilon) }[/math] and [math]\displaystyle{ (S\langle \Sigma^*\rangle, +,\cdot,0,\varepsilon) }[/math] are semirings, where [math]\displaystyle{ \varepsilon }[/math] is the empty word in [math]\displaystyle{ \Sigma^* }[/math].
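
A toy illustration (added here) of these operations with S the semiring of non-negative integers, representing series of finite support (that is, elements of S⟨Σ*⟩) as Python dictionaries from words to coefficients:

    from collections import defaultdict

    def cauchy(r1, r2):
        """(r1 . r2, w) = sum over factorizations w = w1 w2 of (r1, w1)(r2, w2)."""
        out = defaultdict(int)
        for w1, c1 in r1.items():
            for w2, c2 in r2.items():
                out[w1 + w2] += c1 * c2      # concatenation of words, product of weights
        return dict(out)

    def hadamard(r1, r2):
        """(r1 (.) r2, w) = (r1, w)(r2, w); missing keys stand for coefficient 0."""
        return {w: r1[w] * r2[w] for w in r1 if w in r2}

    r1 = {"": 1, "a": 2}           # 1*eps + 2*a
    r2 = {"b": 3, "ab": 1}         # 3*b + 1*ab
    print(cauchy(r1, r2))          # {'b': 3, 'ab': 7, 'aab': 2}
    print(hadamard({"a": 2, "b": 5}, {"b": 4}))   # {'b': 20}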

These formal power series are used to model the behavior of weighted automata, in theoretical computer science, when the coefficients [math]\displaystyle{ (r,w) }[/math] of the series are taken to be the weight of a path with label [math]\displaystyle{ w }[/math] in the automaton.[5]

Replacing the index set by an ordered abelian group

Main page: Hahn series

Suppose [math]\displaystyle{ G }[/math] is an ordered abelian group, meaning an abelian group with a total ordering [math]\displaystyle{ \lt }[/math] respecting the group's addition, so that [math]\displaystyle{ a\lt b }[/math] if and only if [math]\displaystyle{ a+c\lt b+c }[/math] for all [math]\displaystyle{ c }[/math]. Let I be a well-ordered subset of [math]\displaystyle{ G }[/math], meaning I contains no infinite descending chain. Consider the set consisting of

[math]\displaystyle{ \sum_{i \in I} a_i X^i }[/math]

for all such I, with [math]\displaystyle{ a_i }[/math] in a commutative ring [math]\displaystyle{ R }[/math], where we assume that for any index set, if all of the [math]\displaystyle{ a_i }[/math] are zero then the sum is zero. Then [math]\displaystyle{ R((G)) }[/math] is the ring of formal power series on [math]\displaystyle{ G }[/math]; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation [math]\displaystyle{ R^G }[/math] is used to denote [math]\displaystyle{ R((G)) }[/math].[6]

Various properties of [math]\displaystyle{ R }[/math] transfer to [math]\displaystyle{ R((G)) }[/math]. If [math]\displaystyle{ R }[/math] is a field, then so is [math]\displaystyle{ R((G)) }[/math]. If [math]\displaystyle{ R }[/math] is an ordered field, we can order [math]\displaystyle{ R((G)) }[/math] by setting any element to have the same sign as its leading coefficient, defined as the least element of the index set I associated to a non-zero coefficient. Finally if [math]\displaystyle{ G }[/math] is a divisible group and [math]\displaystyle{ R }[/math] is a real closed field, then [math]\displaystyle{ R((G)) }[/math] is a real closed field, and if [math]\displaystyle{ R }[/math] is algebraically closed, then so is [math]\displaystyle{ R((G)) }[/math].

This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality.

Examples and related topics

  • Bell series are used to study the properties of multiplicative arithmetic functions
  • Formal groups are used to define an abstract group law using formal power series
  • Puiseux series are an extension of formal Laurent series, allowing fractional exponents
  • Rational series

See also

  • Ring of restricted power series

Notes

  1. For each nonzero formal Laurent series, the order is an integer (that is, the degrees of the terms are bounded below). But the ring [math]\displaystyle{ R((X)) }[/math] contains series of all orders.

References

  1. Zwillinger, Daniel; Moll, Victor Hugo, eds (2015). "0.313" (in en). Table of Integrals, Series, and Products (8 ed.). Academic Press, Inc.. p. 18. ISBN 978-0-12-384933-5.  (Several previous editions as well.)
  2. Niven, Ivan (October 1969). "Formal Power Series". American Mathematical Monthly 76 (8): 871–889. doi:10.1080/00029890.1969.12000359. 
  3. Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci.. 62 (2nd printing of 1st ed.). Springer-Verlag. p. 167. ISBN 978-3-540-63003-6. 
  4. Moran, Siegfried (1983). The Mathematical Theory of Knots and Braids: An Introduction. North-Holland Mathematics Studies. 82. Elsevier. p. 211. ISBN 978-0-444-86714-8. 
  5. Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1, p. 12
  6. Shamseddine, Khodr; Berz, Martin (2010). "Analysis on the Levi-Civita Field: A Brief Overview". Contemporary Mathematics 508: 215–237. doi:10.1090/conm/508/10002. ISBN 9780821847404. http://www.physics.umanitoba.ca/~khodr/Publications/RS-Overview-offprints.pdf. 
  • Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. 
  • Nicolas Bourbaki: Algebra, IV, §4. Springer-Verlag 1988.

Further reading

  • W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997, ISBN:3-540-60420-0
  • Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1
  • Arto Salomaa (1990). "Formal Languages and Power Series". in Jan van Leeuwen. Formal Models and Semantics. Handbook of Theoretical Computer Science. B. Elsevier. pp. 103–132. ISBN 0-444-88074-7. 




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Formal_power_series