Indefinite sum


Short description: Inverse of a finite difference

In the calculus of finite differences, the indefinite sum operator (also known as the antidifference operator), denoted by ∑_x or Δ⁻¹,[1][2] is the linear operator that is the inverse of the forward difference operator, Δ:

Δf(x) = f(x+1) − f(x).

It relates to the forward difference operator as the indefinite integral relates to the derivative. Just as the indefinite integral yields the family of functions whose derivative is f, the indefinite sum yields the family of functions whose forward difference is f.

If ∑_x f(x) = F(x), then F satisfies the functional equation

F(x+1) − F(x) = f(x).

Applying the forward difference operator to an indefinite sum returns the original function:[3]

Δ ∑_x f(x) = f(x).

In the notation ∑_x f(x), the variable x plays the same role as the index variable in a discrete sum; it indicates the argument at which the antidifference is to be evaluated. The subscript x acts as a placeholder, analogous to the k in ∑_{k=0}^{x−1} f(k), and specifies that the antidifference F is a function of x.

The solution F(x) is not unique: if F(x) is one solution, then for any 1-periodic function C(x) (i.e., C(x+1) = C(x)), the function F(x) + C(x) is also a solution. Therefore, an indefinite sum is unique only up to an arbitrary 1-periodic function C(x), rather than up to a constant C as the indefinite integral is.
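This non-uniqueness is easy to verify numerically. The following Python sketch (function names are illustrative) checks that a particular antidifference of f(x) = x, and that same function plus a 1-periodic perturbation, both satisfy the functional equation:

```python
import math

def f(x):   # the function to be summed
    return x

def F(x):   # one antidifference of f: F(x+1) - F(x) = x
    return x * (x - 1) / 2

def G(x):   # another solution, differing by a 1-periodic term
    return F(x) + math.sin(2 * math.pi * x)

for x in [0.0, 0.3, 1.7, 4.25]:
    assert math.isclose(F(x + 1) - F(x), f(x), abs_tol=1e-12)
    assert math.isclose(G(x + 1) - G(x), f(x), abs_tol=1e-12)
```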

To obtain the unique solution up to a constant C, one must impose additional constraints. The Nørlund principal solution is the unique analytic solution that has the minimal possible exponential type, filtering out any non‑constant periodic component.[4]

Forward and backward difference conventions

A comparison of the indefinite sum operators to their discrete counterparts. The inverse backward difference of x is shown in yellow, and the inverse forward difference of x is shown in blue (both with respect to x).

The inverse forward difference operator, Δ⁻¹, extends the summation up to x−1, typically starting with the iterator at 0:

∑_{k=0}^{x−1} f(k).

Some authors analytically extend summation for which the upper limit is the argument without a shift, typically starting the iterator at 1:[5][6][7]

∑_{k=1}^{x} f(k).

In this case, the analytic continuation, F(x), of the sum is a solution of the inverse backward difference, ∇⁻¹f(x). Stated explicitly, that is:

F(x) − F(x−1) = f(x),

which follows from the discrete counterpart:

∑_{k=1}^{x} f(k) − ∑_{k=1}^{x−1} f(k) = f(x).

Some authors use the equivalent form called the telescoping equation:[8]

F(x+1) − F(x) = f(x+1).

The lower bound of the discrete analog, for both the inverse forward difference and the inverse backward difference, can be an arbitrary constant other than those listed here, since the change is absorbed into the 1-periodic or constant term C.

Fundamental theorem of the calculus of finite differences

Indefinite sums can be used to calculate definite sums with the formula:[9]

∑_{k=a}^{b} f(k) = Δ⁻¹f(b+1) − Δ⁻¹f(a).

Alternatively, using the inverse backward difference operator, the relation is:

∑_{k=a}^{b} f(k) = ∇⁻¹f(b) − ∇⁻¹f(a−1).
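As a quick sanity check of the forward-difference version of the formula, the Python sketch below (with f(k) = k and its antidifference F(k) = k(k−1)/2 chosen purely for illustration) compares both sides on a concrete range:

```python
# Fundamental theorem check: sum_{k=a}^{b} f(k) = F(b+1) - F(a),
# where F is a forward antidifference of f (here f(k) = k).
def f(k):
    return k

def F(k):                      # F(k+1) - F(k) = f(k)
    return k * (k - 1) // 2

a, b = 3, 10
assert sum(f(k) for k in range(a, b + 1)) == F(b + 1) - F(a)
```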

Examples

The following basic indefinite sums follow from the fundamental properties of the difference operator, where C(x) represents an arbitrary 1-periodic function (or a constant if the Nørlund principal solution is assumed):[10]

Constant:
∑_x c = cx + C(x)
Exponential:
∑_x a^x = a^x/(a−1) + C(x),  a ≠ 1
Logarithm:
∑_x ln x = ln Γ(x) + C(x)
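Each entry can be verified by applying the forward difference to the closed form; a small Python check (the sample values a = 3 and x = 2.5 are arbitrary):

```python
import math

a, x = 3.0, 2.5

# Exponential: F(x) = a**x / (a - 1) satisfies F(x+1) - F(x) = a**x
F = lambda t: a**t / (a - 1)
assert math.isclose(F(x + 1) - F(x), a**x)

# Logarithm: F(x) = ln Gamma(x) satisfies F(x+1) - F(x) = ln x,
# since Gamma(x+1) = x * Gamma(x)
G = lambda t: math.lgamma(t)
assert math.isclose(G(x + 1) - G(x), math.log(x))
```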

Falling factorials

Falling factorials provide the discrete analog of the power rule from differential calculus. In infinitesimal calculus, d/dx x^n = n x^{n−1}. In the calculus of finite differences, the falling factorial

(x)_n = x^{n̲} = x(x−1)(x−2)⋯(x−n+1) = Γ(x+1)/Γ(x−n+1)

plays the role of x^n, and the forward difference operator satisfies

Δ(x)_n = n(x)_{n−1}.

The indefinite sum of a falling factorial is given by the discrete analog of the power rule for integration:

∑_x (x)_n = (x)_{n+1}/(n+1) + C(x),  n ≠ −1.

Equivalently, using the Gamma function:

∑_x Γ(x+1)/Γ(x−n+1) = Γ(x+1)/((n+1)Γ(x−n)) + C(x),  n ≠ −1.

For the case n = −1, the solution is the digamma function with a shift, ψ(x+1) + C(x), which naturally extends the harmonic numbers.

Example: Sum of the first x squares. Using k² = (k)_2 + (k)_1 and the indefinite sum formula above,

∑_k k² = (k)_3/3 + (k)_2/2 + C(k).

Applying the fundamental theorem of the calculus of finite differences,

∑_{k=0}^{x} k² = ((k)_3/3 + (k)_2/2) |_0^{x+1} = ((x+1)_3/3 + (x+1)_2/2) − ((0)_3/3 + (0)_2/2) = (x+1)_3/3 + (x+1)_2/2.

Expanding the falling factorials,

(x+1)_3 = (x+1)x(x−1),  (x+1)_2 = (x+1)x,

and simplifying yields the formula

∑_{k=0}^{x} k² = x(x + 1/2)(x + 1)/3.
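The derivation can be verified exactly with rational arithmetic (the closed form equals the familiar x(x+1)(2x+1)/6); the helper names below are illustrative:

```python
from fractions import Fraction

def falling(x, n):
    """Falling factorial (x)_n = x(x-1)...(x-n+1)."""
    p = Fraction(1)
    for i in range(n):
        p *= x - i
    return p

def F(k):  # antidifference of k^2, via k^2 = (k)_2 + (k)_1
    return falling(k, 3) / 3 + falling(k, 2) / 2

for x in range(20):
    lhs = sum(k * k for k in range(x + 1))
    assert F(x + 1) - F(0) == lhs                       # fundamental theorem
    assert lhs == Fraction(x * (x + 1) * (2 * x + 1), 6)  # equivalent closed form
```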

Summation by parts

Indefinite summation by parts is the discrete analog of integration by parts. It is derived from the product rule for the forward difference operator.

Product rule. For two functions u(x) and v(x), the product rule for the forward difference is:

Δ(u(x)v(x)) = u(x)Δv(x) + v(x+1)Δu(x).

Introducing the shift operator E, defined by Ef(x)=f(x+1), this can be written more compactly as:

Δ(uv) = uΔv + (Ev)(Δu).

Summation by parts. Rearranging the product rule gives:

u(x)Δv(x) = Δ(u(x)v(x)) − v(x+1)Δu(x).

Taking the indefinite sum of both sides and using the fact that ∑_x ΔF(x) = F(x) + C(x) (where C(x) is an arbitrary 1-periodic function) yields the formula for summation by parts:[11][10]

∑_x u(x)Δv(x) = u(x)v(x) − ∑_x v(x+1)Δu(x) + C(x).

A symmetrical form, also obtained from the product rule, is:

∑_x f(x)Δg(x) + ∑_x g(x)Δf(x) = f(x)g(x) − ∑_x Δf(x)Δg(x) + C(x).

Definite summation by parts. For definite sums from a to b, the formula becomes:

∑_{k=a}^{b} u(k)Δv(k) = [u(b+1)v(b+1) − u(a)v(a)] − ∑_{k=a}^{b} v(k+1)Δu(k).
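Since this is an algebraic identity, it holds exactly for any choice of u and v; a Python check with sample functions (chosen only for illustration):

```python
# Numerical check of definite summation by parts.
def u(k): return k * k
def v(k): return 2 ** k
def du(k): return u(k + 1) - u(k)   # forward difference of u
def dv(k): return v(k + 1) - v(k)   # forward difference of v

a, b = 1, 8
lhs = sum(u(k) * dv(k) for k in range(a, b + 1))
rhs = (u(b + 1) * v(b + 1) - u(a) * v(a)) - sum(v(k + 1) * du(k) for k in range(a, b + 1))
assert lhs == rhs
```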

Example: product of a polynomial and an exponential[12]

Summation by parts is effective for functions like k·2^k. To find the indefinite sum ∑_k k·2^k, let u(k) = k and Δv(k) = 2^k. Then:

  • Δu(k) = (k+1) − k = 1
  • v(k) = ∑_k 2^k = 2^k/(2−1) = 2^k
  • Ev(k) = v(k+1) = 2^{k+1}

Applying the summation by parts formula:

∑_k k·2^k = k·2^k − ∑_k 2^{k+1}·1 + C(k).

The remaining sum is elementary:

∑_k 2^{k+1} = 2 ∑_k 2^k = 2·2^k = 2^{k+1}.

Hence the indefinite sum (antidifference) is

F(k) := ∑_k k·2^k = k·2^k − 2^{k+1} + C(k) = (k − 2)·2^k + C(k).

To evaluate the definite sum from 0 to x, we use the fundamental theorem with the forward difference inverse:

∑_{k=0}^{x} k·2^k = F(x+1) − F(0).

Substituting the expression for F:

∑_{k=0}^{x} k·2^k = [((x+1) − 2)·2^{x+1}] − [(0 − 2)·2^0] = (x−1)·2^{x+1} − (−2) = (x−1)·2^{x+1} + 2.

Thus, for any non‑negative integer x,

∑_{k=0}^{x} k·2^k = (x−1)·2^{x+1} + 2.
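A brute-force comparison against the direct sum confirms the closed form:

```python
def S(x):                     # closed form derived above
    return (x - 1) * 2 ** (x + 1) + 2

for x in range(15):
    assert S(x) == sum(k * 2 ** k for k in range(x + 1))
```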

Uniqueness of the principal solution

The functional equation F(x+1) − F(x) = f(x) does not have a unique solution. If F_1(x) is a particular solution, then for any function C(x) satisfying C(x+1) = C(x) (i.e., any 1-periodic function), the function F_2(x) = F_1(x) + C(x) is also a solution. Therefore, the indefinite sum operator defines a family of functions differing by an arbitrary 1-periodic component, C(x).

To select the unique principal solution (German: Hauptlösung)[4] up to an additive constant C (instead of up to the additive 1-periodic function C(x)) one must impose additional constraints.

Complex analysis (exponential type)

Following the theory developed by Niels Erik Nørlund,[4] the indefinite sum can be uniquely determined for analytic functions by imposing a restriction on their growth in the complex plane. Specifically, imposing minimal growth filters out the non-constant periodic terms.

Suppose f(z) is analytic in a vertical strip containing the real axis, and let F(z) be an analytic solution of F(z+1) − F(z) = f(z) in that strip. To ensure uniqueness, require F(z) to be of minimal growth, specifically of exponential type less than 2π in the imaginary direction. That is, there exist constants M > 0 and ε > 0 such that |F(z)| ≤ M e^{(2π−ε)|Im(z)|} as |Im(z)| → ∞.[13][14]

Let F_1(z) and F_2(z) be two analytic solutions satisfying this growth condition. Their difference C(z) = F_1(z) − F_2(z) is then analytic, 1-periodic (i.e., C(z+1) = C(z)), and inherits the same exponential type less than 2π.

Nørlund uses a fundamental result in complex analysis (related to Carlson's theorem, the Phragmén–Lindelöf principle, and the Paley–Wiener theorem) which states that a non-constant periodic entire function must have exponential type at least 2π.[4] This follows from its Fourier series expansion: if C(z) is non-constant, its Fourier series contains a term a_n e^{2πinz} with n ≠ 0, which has type 2π|n| ≥ 2π. Since C(z) has type strictly less than 2π, it cannot contain any such term and therefore must be constant.

The condition that f have exponential type less than 2π in the imaginary direction is sufficient but not strictly necessary. Nørlund's general definition of the principal solution is the analytic solution F having the minimal possible exponential type for the given f.[4] If f has exponential type k in the imaginary direction, then the principal solution F(z) also has type k in that strip, provided the construction converges. For example, f(z) = sin(7z) has exponential type 7; its principal solution exists and has type 7, even though 7 > 2π.

When f has exponential type exactly 2πn for some non-zero integer n in every strip where it is analytic (e.g., f(z) = sin(2πnz) has type 2πn; its antidifference contains sin(πn) = 0 in the denominator), the principal solution fails to exist (or is undefined everywhere) because it resonates with the kernel of the difference operator. In all other cases (i.e., when f is meromorphic and its exponential type in some vertical strip is not an integer multiple of 2π), the principal solution exists and is uniquely determined by minimal exponential type.

Real analysis (higher‑order convexity)

In real analysis, the uniqueness condition can be given using higher-order convexity, generalizing the Bohr-Mollerup theorem. For an integer p ≥ 0, a function is called p-convex if its divided differences of order p are non-negative, and p-concave if those divided differences are non-positive. A function is called eventually p-convex (resp. eventually p-concave) if there exists M > 0 such that it is p-convex (resp. p-concave) on the interval (M, ∞).

Marichal and Zenaïdi proved the following uniqueness theorem; their method requires the solution to be eventually p-convex or p-concave.[15][16]

Theorem. Let p ≥ 0 be an integer and let g : ℝ₊ → ℝ satisfy lim_{n→∞} Δ^p g(n) = 0. If f : ℝ₊ → ℝ is an eventually p-convex or eventually p-concave solution of Δf = g, then f is uniquely determined up to an additive constant. Moreover, for any x > 0,

f(x) = f(1) + lim_{n→∞} ( ∑_{k=1}^{n−1} g(k) − ∑_{k=0}^{n−1} g(x+k) + ∑_{j=1}^{p} (x choose j) Δ^{j−1}g(n) ),

and the convergence is uniform on bounded subsets of ℝ₊.

Müller–Schleicher axiomatic method

In their paper How to Add a Noninteger Number of Terms,[5] Müller and Schleicher introduced an axiomatic approach to fractional summation with a real or complex number of terms. Their method extends the classical discrete sum

∑_{k=1}^{x} f(k)

to non-integer and complex upper limits x. The definition is built upon six natural axioms:

  1. Continued Summation: ∑_{ν=x}^{y} f(ν) + ∑_{ν=y+1}^{z} f(ν) = ∑_{ν=x}^{z} f(ν).
  2. Translation Invariance: ∑_{ν=x+s}^{y+s} f(ν) = ∑_{ν=x}^{y} f(ν+s).
  3. Linearity: ∑_{ν=x}^{y} (λf(ν) + μg(ν)) = λ ∑_{ν=x}^{y} f(ν) + μ ∑_{ν=x}^{y} g(ν).
  4. Empty Sum Condition: ∑_{ν=1}^{1} f(ν) = f(1) (with S1, equivalent to requiring that empty sums vanish).
  5. Holomorphy for Monomials: for each d ∈ ℕ₀, the map z ↦ ∑_{ν=1}^{z} ν^d is holomorphic on ℂ.
  6. Right-Shift Continuity: if f(z+n) → 0 pointwise as n → +∞, then ∑_{ν=x}^{y} f(ν+n) → 0; more generally, if f(z+n) can be approximated by polynomials p_n(z+n) of fixed degree with |f(z+n) − p_n(z+n)| → 0, then:
|∑_{ν=x}^{y} f(ν+n) − ∑_{ν=x}^{y} p_n(ν+n)| → 0.

Axioms S1–S4 force the sum to agree with the ordinary finite sum when the limits are integers. Axiom S5 forces monomials to behave consistently under the generalization to fractional sums. Axiom S6 is the crucial axiom, which allows one to "step back" from the asymptotic region to determine the fractional sum on a finite interval. The exact conditions for the method to work are, as stated in Definition 1.2 of the paper:

Let U ⊆ ℂ and σ ∈ ℕ₀ ∪ {−∞}. A function f : U → ℂ will be called fractional summable of degree σ if the following conditions are satisfied:

  • x + 1 ∈ U for all x ∈ U;
  • there exists a sequence of polynomials (p_n)_{n∈ℕ} of fixed degree σ such that for all x ∈ U,
|f(n+x) − p_n(n+x)| → 0 as n → +∞;
  • for every x, y + 1 ∈ U, the limit
lim_{n→∞} ( ∑_{ν=n+x}^{n+y} p_n(ν) + ∑_{ν=1}^{n} (f(ν+x−1) − f(ν+y)) )

exists.

In the simplest case, when f(t) → 0 as t → ∞ (i.e., the approximating polynomials are zero), this reduces to:

∇⁻¹f(x) = ∑_{k=1}^{x} f(k) = ∑_{n=1}^{∞} (f(n) − f(n+x)) + C.
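For instance, with f(t) = 1/t (which decays to 0), the reduced formula reproduces the harmonic numbers at integer arguments, since the series telescopes. A Python sketch with a truncated series (the truncation length is arbitrary):

```python
import math

# Fractional sum via sum_{n>=1} (f(n) - f(n+x)), truncated at `terms`.
def frac_sum(f, x, terms=200000):
    return sum(f(n) - f(n + x) for n in range(1, terms))

for x in [1, 2, 5]:
    H = sum(1.0 / k for k in range(1, x + 1))   # harmonic number H_x
    assert math.isclose(frac_sum(lambda t: 1.0 / t, x), H, rel_tol=1e-4)
```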

Symmetry of the principal solution

Following directly from uniqueness, if f(z) is a meromorphic function, one can define a unique analytic solution of the backward difference equation by imposing the conditions that:

  • Difference Equation: F(x) − F(x−1) = f(x)
  • Normalization: F(0)=0 (empty sum boundary condition).
  • Growth constraint: F(z) has the minimal possible exponential type in the imaginary direction.

Under these conditions, F(z) satisfies a reflection formula, referred to by Nørlund as the Ergänzungssatz (a complementary theorem to the uniqueness of the principal solution [Hauptlösung]), which he states for a general span ω as a relation between F(x−ω | ω) and F(−x | ω).[17]

Image of the inverse backward difference of x, where f(x) = x is a simple example of an odd function. In the real plane, the point symmetry appears as a line symmetry about −1/2.

Odd functions

If f(z) is an odd function (f(−z) = −f(z)), the unique analytic solution F(z) satisfies:[17]

F(−z) = F(z−1).

This represents a point symmetry about z = −1/2.

Even functions

If f(z) is an even function (f(−z) = f(z)), the unique analytic solution F(z) satisfies:[17]

F(−z) + F(z−1) = F(−1).

Relationship to indefinite products

In the symbolic method developed by Niels Erik Nørlund and L. M. Milne-Thomson, the indefinite product operator ∏_x serves as the multiplicative analog of the indefinite sum. It is defined by the first-order homogeneous equation F(x+1) = f(x) F(x).

By taking the logarithm of the product formula, one obtains the telescoping identity Δ ln F(x) = ln f(x).[18] This allows any indefinite product to be expressed through an indefinite sum:

∏_x f(x) = ϖ(x) exp(∑_x ln f(x)),

where ϖ(x) is an arbitrary periodic function of period 1.[19] Conversely, an indefinite sum may be represented as the logarithm of an indefinite product:

∑_x f(x) = ln(∏_x exp(f(x))) + C(x).

Expansions and definitions

Newton series

For an entire function of exponential type less than ln 2,[20] the inverse forward difference operator, Δ⁻¹f(x), can be expressed by its Newton series expansion:[21][22]

∑_x f(x) = ∑_{k=1}^{∞} (x choose k) Δ^{k−1}f(0) + C(x) = ∑_{k=1}^{∞} (Δ^{k−1}f(0)/k!) (x)_k + C(x),

where (x)_k = Γ(x+1)/Γ(x−k+1) is the falling factorial.

Bernoulli‑operator series expansion

Formally, the inverse forward difference operator can be expressed in terms of the derivative operator D=ddx using the exponential generating function of the Bernoulli numbers:[23][24][25]

Δ⁻¹ = 1/(e^D − 1) = ∑_{v=0}^{∞} (B_v/v!) D^{v−1},

where B_v are the Bernoulli numbers defined by the generating function t/(e^t − 1) = ∑_{v=0}^{∞} B_v t^v/v!. Under this convention, B_1 = −1/2.

If f is a polynomial, only finitely many terms of the series are non-zero, since the finite difference of a monomial is a polynomial of one degree lower (so, by induction, only finitely many terms are required). For f(x) = x^n one obtains the antidifference:[24]

∑_x x^n = B_{n+1}(x)/(n+1) + C(x),

where Bn(x) are the Bernoulli polynomials of the first order.[24]
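For n = 2 this gives ∑_x x² = B_3(x)/3 + C(x), which can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Bernoulli polynomial B_3(x) = x^3 - (3/2)x^2 + (1/2)x.
# Then B_3(x+1) - B_3(x) = 3x^2, so B_3(x)/3 is an antidifference of x^2.
def B3(x):
    x = Fraction(x)
    return x**3 - Fraction(3, 2) * x**2 + Fraction(1, 2) * x

for x in range(10):
    assert B3(x + 1) / 3 - B3(x) / 3 == Fraction(x * x)
```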

If f admits a Maclaurin series expansion f(x) = ∑_{n=0}^{∞} (f^{(n)}(0)/n!) x^n, taking the antidifference of the monomials in the series expansion yields the formal series:[25]

∑_x f(x) = ∑_{n=1}^{∞} (f^{(n−1)}(0)/n!) B_n(x) + C(x).

For non‑polynomials this expansion is generally asymptotic.

Relation to the inverse backward difference

If one instead expands the inverse backward difference operator, ∇⁻¹ = e^D/(e^D − 1) (which extends ∑_{k=1}^{x} f(k)), it admits the same expansion, but with B_1 = +1/2 in place of B_1 = −1/2.

Euler–Maclaurin formula

The Euler–Maclaurin formula extends ∇⁻¹f(x) = ∑_{k=1}^{x} f(k):[6][13]

∇⁻¹f(x) = ∫_1^x f(t) dt + (f(1) + f(x))/2 + ∑_{k=1}^{p} (B_{2k}/(2k)!) (f^{(2k−1)}(x) − f^{(2k−1)}(1)) + R_p + C(x),

where B_{2k} are the even-indexed Bernoulli numbers, p is an arbitrary positive integer, and R_p is the remainder term given by:

R_p = (−1)^{p+1} ∫_1^x f^{(p)}(t) P_p(t)/p! dt,

with P_p(t) = B_p(t − ⌊t⌋) being the periodized Bernoulli function related to the Bernoulli polynomials.
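For a polynomial f the series terminates and the remainder vanishes, so the formula is exact. A check for f(t) = t³, where only the B_2 correction is needed (f''' is constant, so the B_4 term cancels between the endpoints):

```python
from fractions import Fraction

# Euler-Maclaurin for f(t) = t^3: reproduces sum_{k=1}^{x} k^3 exactly.
def em_sum(x):
    x = Fraction(x)
    integral = (x**4 - 1) / 4                         # integral of t^3 from 1 to x
    boundary = (1 + x**3) / 2                         # (f(1) + f(x)) / 2
    b2_term = Fraction(1, 6) / 2 * (3 * x**2 - 3)     # B_2/2! * (f'(x) - f'(1))
    return integral + boundary + b2_term

for x in range(1, 12):
    assert em_sum(x) == sum(k**3 for k in range(1, x + 1))
```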

Laplace summation (Gregory summation formula)

Laplace's summation formula, closely related to the Gregory summation formula, can be seen as the discrete counterpart to the Euler–Maclaurin formula. It expresses the inverse forward difference Δ⁻¹f(x) as:[26][27][12][28]

∑_x f(x) = ∫_0^x f(t) dt − ∑_{k=1}^{∞} (c_k/k!) Δ^{k−1}f(x) + C(x),

where c_k = ∫_0^1 (x)_k dx are the Cauchy numbers of the first kind and (x)_k = Γ(x+1)/Γ(x−k+1) is the falling factorial.

Truncating the series after n terms leaves a remainder that can be expressed as an integral of f(n) times a periodic Bernoulli polynomial.[12][28] In the notation of Charles Jordan, Gregory's formula is:[12]

∑_{x=a}^{z} f(x) = ∫_a^z f(x) dx − ∑_{m=1}^{n} b_m [Δ^{m−1}f(z) − Δ^{m−1}f(a)] − b_n (z−a) Δ^n f(ξ),  a < ξ < z,

where the coefficients b_m are the Bernoulli numbers of the second kind. Note the argument is without a shift, aligning with the inverse backward difference.

Abel–Plana formula

The indefinite sum ∇⁻¹f(x) = ∑_{k=1}^{x} f(k) can be analytically continued by applying the standard Abel–Plana formula to the finite sum ∑_{k=1}^{n} f(k) and then analytically continuing the integer limit n to the variable x. This yields the formula:[7]

∇⁻¹f(x) = ∫_1^x f(t) dt + (f(1) + f(x))/2 + i ∫_0^∞ [(f(x−it) − f(1−it)) − (f(x+it) − f(1+it))] / (e^{2πt} − 1) dt + C(x).

This analytic continuation is valid when the conditions for the original formula are met. The sufficient conditions are:[13][14]

  1. Analyticity: f(z) must be analytic in the closed vertical strip between Re(z) = 1 and Re(z) = Re(x). The formula provides the analytic solution up to, but not beyond, the nearest singularities of f to the line Re(z) = 1.
  2. Growth: f(z) must be of exponential type less than 2π in this strip, satisfying |f(z)| ≤ M e^{(2π−ε)|Im(z)|} for some M > 0, ε > 0 as |Im(z)| → ∞.
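A numerical sketch for f(t) = t² at integer x, where the continuation must agree with the ordinary sum (the quadrature parameters below are illustrative; the integrand decays like e^{−2πt}, so a modest truncation suffices):

```python
import math

def f(t):          # evaluated at complex arguments below
    return t * t

def abel_plana(x, steps=4000, T=10.0):
    integral = (x**3 - 1) / 3.0                  # integral of t^2 from 1 to x
    boundary = (f(1) + f(x)) / 2.0
    # simple rectangle-rule quadrature of the correction integral on (0, T]
    h = T / steps
    total = 0.0
    for i in range(1, steps + 1):
        t = i * h
        num = (f(complex(x, -t)) - f(complex(1, -t))) - (f(complex(x, t)) - f(complex(1, t)))
        total += ((1j * num) / (math.exp(2 * math.pi * t) - 1)).real * h
    return integral + boundary + total

for x in [2, 5, 9]:
    exact = sum(k * k for k in range(1, x + 1))
    assert math.isclose(abel_plana(x), exact, rel_tol=1e-3)
```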

Choice of the constant term

The constant C is often fixed using integral conditions, which makes the result consistent with the Bernoulli polynomials.

Let F(x) = ∇⁻¹f(x) + C. Then the constant C is fixed by the condition ∫_{−1}^{0} F(x) dx = 0 or ∫_{0}^{1} F(x) dx = 0.

For example, ∫_{−1}^{0} x(x+1)/2 dx = −1/12, where ∇⁻¹x = x(x+1)/2 + C(x).
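The integral in the example can be evaluated exactly via an antiderivative:

```python
from fractions import Fraction

# Antiderivative of x(x+1)/2 = (x^2 + x)/2 is x^3/6 + x^2/4;
# evaluate it from -1 to 0.
P = lambda x: Fraction(x)**3 / 6 + Fraction(x)**2 / 4
assert P(0) - P(-1) == Fraction(-1, 12)
```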

Let F(x) = Δ⁻¹f(x) + C. Then the constant C is fixed by the condition ∫_{0}^{1} F(x) dx = 0 or ∫_{1}^{2} F(x) dx = 0.

Alternatively, Ramanujan summation can be used: ∑_{x≥1}^{ℜ} f(x) = −f(0) − F(0), or, at 1, ∑_{x≥1}^{ℜ} f(x) = −F(1), respectively.[29][30]

See also

  • Indefinite product
  • Time scale calculus
  • List of derivatives and integrals in alternative calculi

References

  1. Man, Yiu-Kwong (1993), "On computing closed forms for indefinite summations", Journal of Symbolic Computation 16 (4): 355–376, doi:10.1006/jsco.1993.1053 
  2. Goldberg, Samuel (1986). Introduction to Difference Equations, with Illustrative Examples from Economics, Psychology, and Sociology. New York: Dover Publications. p. 41. ISBN 978-0-486-65084-5. https://books.google.com/books?id=QUzNwiVpWGAC&pg=PA41. "If Y is a function whose first difference is the function y, then Y is called an indefinite sum of y and denoted by Δ1y." 
  3. Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 20. ISBN 0-12-403330-X. 
  4. 4.0 4.1 4.2 4.3 4.4 Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. pp. 40–44. ISBN 978-3-642-50514-0. https://link.springer.com/book/10.1007/978-3-642-50824-0. 
  5. 5.0 5.1 Markus Müller and Dierk Schleicher, How to Add a Noninteger Number of Terms: From Axioms to New Identities, Amer. Math. Mon. 118(2), 136-152 (2011).
  6. 6.0 6.1 Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series". p. 3. https://univ-cotedazur.hal.science/hal-01150208/file/RamanujanSummationSpringer2.pdf. 
  7. 7.0 7.1 Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series". p. 23. https://univ-cotedazur.hal.science/hal-01150208/file/RamanujanSummationSpringer2.pdf. 
  8. Algorithms for Nonlinear Higher Order Difference Equations, Manuel Kauers
  9. "Handbook of discrete and combinatorial mathematics", Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1
  10. 10.0 10.1 Jordan, Charles (1960). Calculus of Finite Differences (Second ed.). New York, NY: Chelsea Publishing Company. pp. 104-107. https://archive.org/details/calculusoffinite0000unse/page/104/mode/2up. 
  11. Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 24. ISBN 0-12-403330-X. 
  12. 12.0 12.1 12.2 12.3 Jordan, Charles (1960). Calculus of Finite Differences (Second ed.). New York, NY: Chelsea Publishing Company. pp. 284-285. https://archive.org/details/calculusoffinite0000unse/page/284/mode/2up. 
  13. 13.0 13.1 13.2 "§2.10 Sums and Sequences". NIST Digital Library of Mathematical Functions. National Institute of Standards and Technology. https://dlmf.nist.gov/2.10#E2. 
  14. 14.0 14.1 Olver, Frank W. J. (1997). Asymptotics and Special Functions. A K Peters Ltd.. p. 290. ISBN 978-1-56881-069-0. 
  15. Marichal, Jean‑Luc; Zenaïdi, Naïm (2024). "A generalization of Bohr‑Mollerup's theorem for higher order convex functions: a tutorial". Aequationes Mathematicae 98 (2): 455–481. doi:10.1007/s00010-023-00968-9. 
  16. Marichal, Jean‑Luc; Zenaïdi, Naïm (2022). A Generalization of Bohr‑Mollerup's Theorem for Higher Order Convex Functions. Developments in Mathematics. 70. Springer. doi:10.1007/978-3-030-95088-0. ISBN 978-3-030-95087-3. https://link.springer.com/book/10.1007/978-3-030-95088-0. 
  17. 17.0 17.1 17.2 Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 74. ISBN 978-3-642-50514-0. https://link.springer.com/book/10.1007/978-3-642-50824-0. 
  18. Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 109. ISBN 978-3-642-50514-0. https://link.springer.com/book/10.1007/978-3-642-50824-0. 
  19. Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co.. pp. 324–325. https://archive.org/details/calculusoffinite032017mbp/page/324/mode/2up. 
  20. Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 237. ISBN 978-3-642-50514-0. https://link.springer.com/book/10.1007/978-3-642-50824-0. 
  21. Newton, Isaac, (1687). Principia, Book III, Lemma V, Case 1
  22. Iaroslav V. Blagouchine (2018). "Three notes on Ser's and Hasse's representations for the zeta-functions". Integers (Electronic Journal of Combinatorial Number Theory) 18A: 1–45. doi:10.5281/zenodo.10581385. http://math.colgate.edu/~integers/sjs3/sjs3.pdf. 
  23. Steffensen, J. F. (1950). Interpolation (2nd ed.). New York, NY: Chelsea Publishing Company. p. 192. https://archive.org/details/interpolation0000unse/page/192/mode/2up. 
  24. 24.0 24.1 24.2 Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co.. pp. 139–140. https://archive.org/details/calculusoffinite032017mbp/page/139/mode/2up. 
  25. 25.0 25.1 Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. pp. 142-143. ISBN 978-3-642-50514-0. https://link.springer.com/book/10.1007/978-3-642-50824-0. 
  26. Bernoulli numbers of the second kind on Mathworld
  27. Ferraro, Giovanni (2008). The Rise and Development of the Theory of Series up to the Early 1820s. Springer Science+Business Media, LLC. p. 248. ISBN 978-0-387-73468-2. 
  28. 28.0 28.1 Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co.. pp. 180-181. https://archive.org/details/calculusoffinite032017mbp/page/180/mode/2up. 
  29. Bruce C. Berndt, Ramanujan's Notebooks , Ramanujan's Theory of Divergent Series, Chapter 6, Springer-Verlag (ed.), (1939), pp. 133–149.
  30. Éric Delabaere, Ramanujan's Summation, Algorithms Seminar 2001–2002, F. Chyzak (ed.), INRIA, (2003), pp. 83–88.


Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Indefinite_sum