Differential Equation

From Britannica 11th Edition (1911)

Differential Equation, in mathematics, a relation between one or more functions and their differential coefficients. The subject is treated here in two parts: (1) an elementary introduction dealing with the more commonly recognized types of differential equations which can be solved by rule; and (2) the general theory.

Part I.—Elementary Introduction.

Of equations involving only one independent variable, x (known as ordinary differential equations), and one dependent variable, y, and containing only the first differential coefficient dy/dx (and therefore said to be of the first order), the simplest form is that reducible to the type

dy/dx = ƒ(x)/F(y),

leading to the result ∫F(y)dy − ∫ƒ(x)dx = A, where A is an arbitrary constant; this result is said to solve the differential equation, the problem of evaluating the integrals belonging to the integral calculus.
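For instance, with ƒ(x) = x and F(y) = y the rule gives y²/2 − x²/2 = A. The following minimal sketch (a modern illustration, assuming the SymPy computer-algebra library; the particular ƒ and F are our own choice) lets the machine carry out the quadratures:

```python
# Separable type dy/dx = f(x)/F(y), here with f(x) = x, F(y) = y,
# so that ∫F(y)dy − ∫f(x)dx = A gives y²/2 − x²/2 = A.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
eq = sp.Eq(y(x).diff(x), x / y(x))
for sol in sp.dsolve(eq):          # two branches, y = ±sqrt(C1 + x²)
    print(sol)
```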

Another simple form is

dy/dx + yP = Q,

where P, Q are functions of x only; this is known as the linear equation, since it contains y and dy/dx only to the first degree. If ∫Pdx = u, we clearly have

d/dx (ye^u) = e^u (dy/dx + Py) = e^u Q,

so that y = e^(−u)(∫e^u Q dx + A) solves the equation, and is the only possible solution, A being an arbitrary constant. The rule for the solution of the linear equation is thus to multiply the equation by e^u, where u = ∫Pdx.
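A sketch of the same rule in SymPy (the choices P = 1/x, Q = x are our own; the library is the only other assumption), verifying that y = e^(−u)(∫e^u Q dx + A) satisfies the equation:

```python
# Integrating-factor rule for dy/dx + yP = Q, with P = 1/x, Q = x:
# u = ∫P dx = log x, so e^u = x and y = (∫x·x dx + A)/x = x²/3 + A/x.
import sympy as sp

x = sp.Symbol('x', positive=True)
A = sp.Symbol('A')
P, Q = 1/x, x
u = sp.integrate(P, x)                                  # u = log(x)
y = sp.exp(-u) * (sp.integrate(sp.exp(u)*Q, x) + A)
print(sp.simplify(y.diff(x) + y*P - Q))                 # 0, as required
```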

A third simple and important form is that denoted by

y = px + ƒ(p),

where p is an abbreviation for dy/dx; this is known as Clairaut’s form. By differentiation in regard to x it gives

p = p + x dp/dx + ƒ′(p) dp/dx,

where

ƒ′(p) = (d/dp) ƒ(p);

thus, either (i.) dp/dx = 0, that is, p is constant on the curve satisfying the differential equation, which curve is thus any one of the straight lines y = cx + ƒ(c), where c is an arbitrary constant, or else, (ii.) x + ƒ′(p) = 0; if this latter hypothesis be taken, and p be eliminated between x + ƒ′(p) = 0 and y = px + ƒ(p), a relation connecting x and y, not containing an arbitrary constant, will be found, which obviously represents the envelope of the straight lines y = cx + ƒ(c).

In general if a differential equation φ(x, y, dy/dx) = 0 be satisfied by any one of the curves F(x, y, c) = 0, where c is an arbitrary constant, it is clear that the envelope of these curves, when existent, must also satisfy the differential equation; for this equation prescribes a relation connecting only the co-ordinates x, y and the differential coefficient dy/dx, and these three quantities are the same at any point of the envelope for the envelope and for the particular curve of the family which there touches the envelope. The relation expressing the equation of the envelope is called a singular solution of the differential equation, meaning an isolated solution, as not being one of a family of curves depending upon an arbitrary parameter.
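By way of illustration, a minimal sketch (assuming the SymPy library; the particular choice ƒ(p) = −p² is our own) exhibits the elimination that yields the singular solution:

```python
# Clairaut's form y = px + f(p) with the (hypothetical) choice f(p) = −p²:
# (i) the line family y = cx − c²; (ii) eliminating p between
# x + f′(p) = 0 and y = px + f(p) gives the envelope y = x²/4.
import sympy as sp

x, p = sp.symbols('x p')
f = -p**2
p_env = sp.solve(x + sp.diff(f, p), p)[0]      # x − 2p = 0, so p = x/2
envelope = sp.expand((p*x + f).subs(p, p_env))
print(envelope)                                 # x**2/4, the singular solution
```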

An extended form of Clairaut’s equation expressed by

y = xF(p) + ƒ(p)

may be similarly solved by first differentiating in regard to p, when it reduces to a linear equation of which x is the dependent and p the independent variable; from the integral of this linear equation, and the original differential equation, the quantity p is then to be eliminated.

Other types of solvable differential equations of the first order are (1)

M dy/dx = N,

where M, N are homogeneous polynomials in x and y, of the same order; by putting v = y/x and eliminating y, the equation becomes of the first type considered above, in v and x. An equation (aB ≠ bA)

(ax + by + c)dy/dx = Ax + By + C

may be reduced to this rule by first putting x + h, y + k for x and y, and determining h, k so that ah + bk + c = 0, Ah + Bk + C = 0.

(2) An equation in which y does not explicitly occur,

ƒ(x, dy/dx) = 0,

may, theoretically, be reduced to the type dy/dx = F(x); similarly an equation F(y, dy/dx) = 0.

(3) An equation

ƒ(dy/dx, x, y) = 0,

which is an integral polynomial in dy/dx, may, theoretically, be solved for dy/dx, as an algebraic equation; to any root dy/dx = F1(x, y) corresponds, suppose, a solution φ1(x, y, c) = 0, where c is an arbitrary constant; the product equation φ1(x, y, c)φ2(x, y, c) ... = 0, consisting of as many factors as there were values of dy/dx, is effectively as general as if we wrote φ1(x, y, c1)φ2(x, y, c2) ... = 0; for, to evaluate the first form, we must necessarily consider the factors separately, and nothing is then gained by the multiple notation for the various arbitrary constants. The equation φ1(x, y, c)φ2(x, y, c) ... = 0 is thus the solution of the given differential equation.

In all these cases there is, except for cases of singular solutions, one and only one arbitrary constant in the most general solution of the differential equation; that this must necessarily be so we may take as obvious, the differential equation being supposed to arise by elimination of this constant from the equation expressing its solution and the equation obtainable from this by differentiation in regard to x.
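As an instance of type (1) above, a minimal sketch (assuming SymPy; the example x dy/dx = x + y is our own choice):

```python
# Type (1): M dy/dx = N with M = x, N = x + y homogeneous of order one;
# v = y/x reduces it to x dv/dx = 1, and the solution carries one constant.
import sympy as sp

x = sp.Symbol('x', positive=True)
y = sp.Function('y')
print(sp.dsolve(sp.Eq(x*y(x).diff(x), x + y(x))))   # y(x) = x*(C1 + log(x))
```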

A further type of differential equation of the first order, of the form

dy/dx = A + By + Cy²

in which A, B, C are functions of x, will be briefly considered below under differential equations of the second order.

When we pass to ordinary differential equations of the second order, that is, those expressing a relation between x, y, dy/dx and d²y/dx², the number of types for which the solution can be found by a known procedure is very considerably reduced. Consider the general linear equation

d²y/dx² + P dy/dx + Qy = R,

where P, Q, R are functions of x only. There is no method always effective; the main general result for such a linear equation is that if any particular function of x, say y1, can be discovered, for which

d²y1/dx² + P dy1/dx + Qy1 = 0,

then the substitution y = y1η in the original equation, with R on the right side, reduces this to a linear equation of the first order with the dependent variable dη/dx. In fact, if y = y1η we have

dy/dx = y1 dη/dx + η dy1/dx and d²y/dx² = y1 d²η/dx² + 2 (dy1/dx)(dη/dx) + η d²y1/dx²,

and thus

d²y/dx² + P dy/dx + Qy = y1 d²η/dx² + (2 dy1/dx + Py1) dη/dx + (d²y1/dx² + P dy1/dx + Qy1)η;

if then

d²y1/dx² + P dy1/dx + Qy1 = 0,

and z denote dη/dx, the original differential equation becomes

y1 dz/dx + (2 dy1/dx + Py1) z = R.

From this equation z can be found by the rule given above for the linear equation of the first order, and will involve one arbitrary constant; thence y = y1 η = y1 ∫ zdx + Ay1, where A is another arbitrary constant, will be the general solution of the original equation, and, as was to be expected, involves two arbitrary constants.
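A sketch of this reduction of order in SymPy (the equation y″ − (2/x²)y = 0 and the known solution y1 = x² are our own choices):

```python
# Reduction of order for y'' − (2/x²)y = 0, taking the known solution y1 = x².
# The auxiliary equation y1 dz/dx + (2 dy1/dx + P y1)z = R (here P = 0, R = 0)
# is x²z′ + 4xz = 0, so z = C/x⁴; then y = y1·∫z dx gives a second solution.
import sympy as sp

x, C = sp.symbols('x C', positive=True)
y1 = x**2
z = C / x**4                                   # solves x²z′ + 4xz = 0
y2 = y1 * sp.integrate(z, x)                   # = −C/(3x), a multiple of 1/x
print(sp.simplify(y2.diff(x, 2) - 2*y2/x**2))  # 0
```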

The case of most frequent occurrence is that in which the coefficients P, Q are constants; we consider this case in some detail. If θ be a root of the quadratic equation θ² + θP + Q = 0, it can be at once seen that a particular integral of the differential equation with zero on the right side is y1 = e^(θx). Supposing first the roots of the quadratic equation to be different, and φ to be the other root, so that φ + θ = −P, the auxiliary differential equation for z, referred to above, becomes dz/dx + (θ − φ)z = Re^(−θx), which leads to ze^((θ−φ)x) = B + ∫ Re^(−φx)dx, where B is an arbitrary constant, and hence to

y = Ae^(θx) + e^(θx) ∫ Be^((φ−θ)x) dx + e^(θx) ∫ e^((φ−θ)x) ∫ Re^(−φx) dx dx,

or say to y = Ae^(θx) + Ce^(φx) + U, where A, C are arbitrary constants and U is a function of x, not present at all when R = 0. If the quadratic equation θ² + Pθ + Q = 0 has equal roots, so that 2θ = −P, the auxiliary equation in z becomes dz/dx = Re^(−θx), giving z = B + ∫ Re^(−θx)dx, where B is an arbitrary constant, and hence

y = (A + Bx)e^(θx) + e^(θx) ∫∫ Re^(−θx) dx dx,

or, say, y = (A + Bx)e^(θx) + U, where A, B are arbitrary constants, and U is a function of x not present at all when R = 0. The portion Ae^(θx) + Be^(φx) or (A + Bx)e^(θx) of the solution, which is known as the complementary function, can clearly be written down at once by inspection of the given differential equation. The remaining portion U may, by taking the constants in the complementary function properly, be replaced by any particular solution whatever of the differential equation

d²y/dx² + P dy/dx + Qy = R;

for if u be any particular solution, this has a form

u = A0e^(θx) + B0e^(φx) + U,

or a form

u = (A0 + B0x)e^(θx) + U;

thus the general solution can be written

(A − A0)e^(θx) + (B − B0)e^(φx) + u, or {A − A0 + (B − B0)x}e^(θx) + u,

where A − A0, B − B0, like A, B, are arbitrary constants.
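A sketch of the whole procedure for a constant-coefficient case (assuming SymPy; the right-hand side e^(4x) is our own choice):

```python
# Constant coefficients: y'' − 3y' + 2y = e^(4x); θ² − 3θ + 2 = 0 has the
# distinct roots 1 and 2, so the complementary function is Ae^x + Be^(2x),
# and a particular integral is e^(4x)/6.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
eq = sp.Eq(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), sp.exp(4*x))
print(sp.dsolve(eq))   # y(x) = C1*exp(x) + C2*exp(2*x) + exp(4*x)/6
```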

A similar result holds for a linear differential equation of any order, say

d^n y/dx^n + P1 d^(n−1)y/dx^(n−1) + ... + Pny = R,

where P1, P2, ... Pn are constants, and R is a function of x. If we form the algebraic equation θ^n + P1θ^(n−1) + ... + Pn = 0, and all the roots of this equation be different, say they are θ1, θ2, ... θn, the general solution of the differential equation is

y = A1e^(θ1x) + A2e^(θ2x) + ... + Ane^(θnx) + u,

where A1, A2, ... An are arbitrary constants, and u is any particular solution whatever; but if there be one root θ1 repeated r times, the terms A1e^(θ1x) + ... + Are^(θrx) must be replaced by (A1 + A2x + ... + Arx^(r−1))e^(θ1x), where A1, ... An are arbitrary constants; the remaining terms in the complementary function will similarly need alteration of form if there be other repeated roots.

To complete the solution of the differential equation we need some method of determining a particular integral u; we explain a procedure which is effective for this purpose in the cases in which R is a sum of terms of the form e^(ax)φ(x), where φ(x) is an integral polynomial in x; this includes cases in which R contains terms of the form cos bx·φ(x) or sin bx·φ(x). Denote d/dx by D; it is clear that if u be any function of x, D(e^(ax)u) = e^(ax)Du + ae^(ax)u, or say, D(e^(ax)u) = e^(ax)(D + a)u; hence D²(e^(ax)u), i.e. d²/dx²(e^(ax)u), being equal to D(e^(ax)v), where v = (D + a)u, is equal to e^(ax)(D + a)v, that is to e^(ax)(D + a)²u. In this way we find D^n(e^(ax)u) = e^(ax)(D + a)^n u, where n is any positive integer. Hence if ψ(D) be any polynomial in D with constant coefficients, ψ(D)(e^(ax)u) = e^(ax)ψ(D + a)u. Next, denoting ∫udx by D^(−1)u, and any solution of the differential equation dz/dx + az = u by z = (D + a)^(−1)u, we have D[e^(ax)(D + a)^(−1)u] = D(e^(ax)z) = e^(ax)(D + a)z = e^(ax)u, so that we may write D^(−1)(e^(ax)u) = e^(ax)(D + a)^(−1)u, where the meaning is that one value of the left side is equal to one value of the right side; from this, the expression D^(−2)(e^(ax)u), which means D^(−1)[D^(−1)(e^(ax)u)], is equal to D^(−1)(e^(ax)z) and hence to e^(ax)(D + a)^(−1)z, which we write e^(ax)(D + a)^(−2)u; proceeding thus we obtain

D^(−n)(e^(ax)u) = e^(ax)(D + a)^(−n)u,

where n is any positive integer, and the meaning, as before, is that one value of the first expression is equal to one value of the second. More generally, if ψ(D) be any polynomial in D with constant coefficients, and we agree to denote by [1/ψ(D)]u any solution z of the differential equation ψ(D)z = u, we have, if v = [1/ψ(D + a)]u, the identity ψ(D)(e^(ax)v) = e^(ax)ψ(D + a)v = e^(ax)u, which we write in the form

[1/ψ(D)](e^(ax)u) = e^(ax)[1/ψ(D + a)]u.

This gives us the first step in the method we are explaining, namely that a solution of the differential equation ψ(D)y = e^(ax)u + e^(bx)v + ..., where u, v, ... are any functions of x, is any function denoted by the expression

e^(ax)[1/ψ(D + a)]u + e^(bx)[1/ψ(D + b)]v + ....

It is now to be shown how to obtain one value of [1/ψ(D + a)]u, when u is a polynomial in x, namely one solution of the differential equation ψ(D + a)z = u. Let the highest power of x entering in u be x^m; if t were a variable quantity, the rational fraction in t, 1/ψ(t + a), by first writing it as a sum of partial fractions, or otherwise, could be identically written in the form

Krt^(−r) + Kr−1t^(−r+1) + ... + K1t^(−1) + H + H1t + ... + Hmt^m + t^(m+1)φ(t)/ψ(t + a),

where φ(t) is a polynomial in t; this shows that there exists an identity of the form

1 = ψ(t + a)(Krt^(−r) + ... + K1t^(−1) + H + H1t + ... + Hmt^m) + φ(t)t^(m+1),

and hence an identity

u = ψ(D + a)[KrD^(−r) + ... + K1D^(−1) + H + H1D + ... + HmD^m]u + φ(D)D^(m+1)u;

in this, since u contains no power of x higher than x^m, the second term on the right may be omitted. We thus reach the conclusion that a solution of the differential equation ψ(D + a)z = u is given by

z = (KrD^(−r) + ... + K1D^(−1) + H + H1D + ... + HmD^m)u,

of which the operator on the right is obtained simply by expanding 1/ψ(D + a) in ascending powers of D, as if D were a numerical quantity, the expansion being carried as far as the highest power of D which, operating upon u, does not give zero. In this form every term in z is capable of immediate calculation.
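A small sketch of this expansion rule (assuming SymPy; the equation (D − 1)z = x² is our own choice):

```python
# The expansion rule applied to (D − 1)z = x²: formally 1/(D − 1) =
# −(1 + D + D² + ...), carried to D² since D³x² = 0, giving
# z = −(x² + 2x + 2); direct substitution confirms it.
import sympy as sp

x = sp.Symbol('x')
u = x**2
z = -(u + u.diff(x) + u.diff(x, 2))      # −(1 + D + D²) applied to x²
print(sp.simplify(z.diff(x) - z - u))    # 0, so z is a particular integral
```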

Example.—For the equation

d⁴y/dx⁴ + 2 d²y/dx² + y = x³ cos x or (D² + 1)²y = x³ cos x,

the roots of the associated algebraic equation (θ² + 1)² = 0 are θ = ±i, each repeated; the complementary function is thus

(A + Bx)e^(ix) + (C + Dx)e^(−ix),

where A, B, C, D are arbitrary constants; this is the same as

(H + Kx) cos x + (M + Nx) sin x,

where H, K, M, N are arbitrary constants. To obtain a particular integral we must find a value of (1 + D²)^(−2)x³ cos x; this is the real part of (1 + D²)^(−2)e^(ix)x³ and hence of e^(ix)[1 + (D + i)²]^(−2)x³

or

e^(ix)[2iD(1 − ½iD)]^(−2)x³,

or

−¼e^(ix)D^(−2)(1 + iD − ¾D² − ½iD³ + (5/16)D⁴ + (3/16)iD⁵ ...)x³,

or

−¼e^(ix)((1/20)x⁵ + ¼ix⁴ − ¾x³ − (3/2)ix² + (15/8)x + (9/8)i);

the real part of this is

−¼((1/20)x⁵ − ¾x³ + (15/8)x) cos x + ¼(¼x⁴ − (3/2)x² + (9/8)) sin x.

This expression added to the complementary function found above gives the complete integral; and no generality is lost by omitting from the particular integral the terms −(15/32)x cos x + (9/32) sin x, which are of the types of terms already occurring in the complementary function.
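The example can be checked mechanically (a sketch assuming SymPy; the expression below is the particular integral of the text with the redundant complementary-type terms retained):

```python
# A direct check of the example's particular integral for (D² + 1)²y = x³ cos x:
import sympy as sp

x = sp.Symbol('x')
u = (-sp.Rational(1, 4)*(x**5/20 - sp.Rational(3, 4)*x**3 + sp.Rational(15, 8)*x)*sp.cos(x)
     + sp.Rational(1, 4)*(x**4/4 - sp.Rational(3, 2)*x**2 + sp.Rational(9, 8))*sp.sin(x))
print(sp.simplify(u.diff(x, 4) + 2*u.diff(x, 2) + u - x**3*sp.cos(x)))   # 0
```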

The symbolical method which has been explained has wider applications than that to which we have, for simplicity of explanation, restricted it. For example, if ψ(x) be any function of x, and a1, a2, ... an be different constants, and [(t + a1)(t + a2) ... (t + an)]^(−1) when expressed in partial fractions be written Σcm(t + am)^(−1), a particular integral of the differential equation (D + a1)(D + a2) ... (D + an)y = ψ(x) is given by

y = Σcm(D + am)^(−1)ψ(x) = Σcm(D + am)^(−1)e^(−amx)·e^(amx)ψ(x) = Σcme^(−amx)D^(−1)(e^(amx)ψ(x)) = Σcme^(−amx)∫e^(amx)ψ(x)dx.

The particular integral is thus expressed as a sum of n integrals. A linear differential equation of which the left side has the form

x^n d^n y/dx^n + P1x^(n−1) d^(n−1)y/dx^(n−1) + ... + Pn−1x dy/dx + Pny,

where P1, ... Pn are constants, can be reduced to the case considered above. Writing x = e^t we have the identity

x^m d^m u/dx^m = θ(θ − 1)(θ − 2) ... (θ − m + 1)u, where θ = d/dt.
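A sketch of this substitution at work (assuming SymPy; the equation x²y″ − 2y = 0 is our own choice):

```python
# The substitution x = e^t for the homogeneous linear equation: for
# x²y'' − 2y = 0 the rule x^m d^m u/dx^m = θ(θ−1)...(θ−m+1)u gives
# θ(θ − 1) − 2 = 0, θ = 2 or −1, whence y = C1·x² + C2/x.
import sympy as sp

x = sp.Symbol('x', positive=True)
y = sp.Function('y')
print(sp.dsolve(sp.Eq(x**2*y(x).diff(x, 2) - 2*y(x), 0)))   # C1/x + C2*x**2
```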

When the linear differential equation, which we take to be of the second order, has variable coefficients, though there is no general rule for obtaining a solution in finite terms, there are some results which it is of advantage to have in mind. We have seen that if one solution of the equation obtained by putting the right side zero, say y1, be known, the equation can be solved. If y2 be another solution of

d²y/dx² + P dy/dx + Qy = 0,

there being no relation of the form my1 + ny2 = k, where m, n, k are constants, it is easy to see that

d/dx (y1′y2 − y1y2′) = −P(y1′y2 − y1y2′),

so that we have

y1′y2 − y1y2′ = A exp. (−∫Pdx),

where A is a suitably chosen constant, and exp. z denotes ez. In terms of the two solutions y1, y2 of the differential equation having zero on the right side, the general solution of the equation with R = φ(x) on the right side can at once be verified to be Ay1 + By2 + y1u − y2v, where u, v respectively denote the integrals

u = ∫y2φ(x)(y1′y2 − y2′y1)^(−1)dx, v = ∫y1φ(x)(y1′y2 − y2′y1)^(−1)dx.
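A sketch of this two-solution formula (assuming SymPy; the example y″ + y = sec x is our own choice):

```python
# For y'' + y = sec x: y1 = cos x, y2 = sin x give y1′y2 − y2′y1 = −1, so
# u = −∫sin x·sec x dx = log cos x and v = −∫cos x·sec x dx = −x; then
# y1·u − y2·v = cos x·log cos x + x·sin x is a particular integral.
import sympy as sp

x = sp.Symbol('x')
yp = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)
print(sp.simplify(yp.diff(x, 2) + yp - 1/sp.cos(x)))   # 0
```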

The equation

d²y/dx² + P dy/dx + Qy = 0,

by writing y = v exp. (−½∫Pdx), is at once seen to be reduced to d²v/dx² + Iv = 0, where I = Q − ½dP/dx − ¼P². If η = −(1/v)(dv/dx), the equation d²v/dx² + Iv = 0 becomes dη/dx = I + η², a non-linear equation of the first order.

More generally the equation

dη/dx = A + Bη + Cη²,

where A, B, C are functions of x, is, by the substitution

η = −(1/Cy) dy/dx

reduced to the linear equation

d²y/dx² − (B + (1/C)(dC/dx)) dy/dx + ACy = 0.

The equation

dη/dx = A + Bη + Cη²,

known as Riccati’s equation, is transformed into an equation of the same form by a substitution of the form η = (aY + b)/(cY + d), where a, b, c, d are any functions of x, and this fact may be utilized to obtain a solution when A, B, C have special forms; in particular if any particular solution of the equation be known, say η0, the substitution η = η0 − 1/Y enables us at once to obtain the general solution; for instance, when

2B = (d/dx) log(A/C),

a particular solution is η0 = √(-A/C). This is a case of the remark, often useful in practice, that the linear equation

φ(x) d²y/dx² + ½(dφ/dx)(dy/dx) + μy = 0,

where μ is a constant, is reducible to a standard form by taking a new independent variable z = ∫dx/[φ(x)]^(1/2).
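A sketch of the use of a known particular solution of Riccati's equation (assuming SymPy; the example η′ = 1 − η², with A = 1, B = 0, C = −1 and η0 = 1, is our own choice):

```python
# Riccati's equation with a known particular solution: η′ = 1 − η² has
# η0 = 1, and the substitution η = η0 − 1/Y linearizes it: Y must satisfy
# Y′ = 2Y − 1, e.g. Y = K·e^(2x) + 1/2.
import sympy as sp

x, K = sp.symbols('x K')
Y = K*sp.exp(2*x) + sp.Rational(1, 2)
eta = 1 - 1/Y
print(sp.simplify(eta.diff(x) - (1 - eta**2)))   # 0 for every K
```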

We pass to other types of equations of which the solution can be obtained by rule. We may have cases in which there are two dependent variables, x and y, and one independent variable t, the differential coefficients dx/dt, dy/dt being given as functions of x, y and t. Of such equations a simple case is expressed by the pair

dx/dt = ax + by + c, dy/dt = a′x + b′y + c′,

wherein the coefficients a, b, c, a′, b′, c′, are constants. To integrate these, form with the constant λ the differential coefficient of z = x + λy, that is dz/dt = (a + λa′)x + (b + λb′)y + c + λc′, the quantity λ being so chosen that b + λb′ = λ(a + λa′), so that we have dz/dt = (a + λa′)z + c + λc′; this last equation is at once integrable in the form z(a + λa′) + c + λc′ = Ae^((a + λa′)t), where A is an arbitrary constant. In general, the condition b + λb′ = λ(a + λa′) is satisfied by two different values of λ, say λ1, λ2; the solutions corresponding to these give the values of x + λ1y and x + λ2y, from which x and y can be found as functions of t, involving two arbitrary constants. If, however, the two roots of the quadratic equation for λ are equal, that is, if (a − b′)² + 4a′b = 0, the method described gives only one equation, expressing x + λy in terms of t; by means of this equation y can be eliminated from dx/dt = ax + by + c, leading to an equation of the form dx/dt = Px + Q + Re^((a + λa′)t), where P, Q, R are constants. The integration of this gives x, and thence y can be found.
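A sketch of the process for a particular pair (assuming SymPy; the coefficients are our own choice):

```python
# The pair dx/dt = x + 4y, dy/dt = x + y: b + λb′ = λ(a + λa′) gives λ = ±2,
# so x + 2y = Ae^(3t) and x − 2y = Be^(−t), as dsolve confirms.
import sympy as sp

t = sp.Symbol('t')
x, y = sp.Function('x'), sp.Function('y')
print(sp.dsolve([sp.Eq(x(t).diff(t), x(t) + 4*y(t)),
                 sp.Eq(y(t).diff(t), x(t) + y(t))]))
```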

A similar process is applicable when we have three or more dependent variables whose differential coefficients in regard to the single independent variable are given as linear functions of the dependent variables with constant coefficients.

Another method of solution of the equations

dx/dt = ax + by + c, dy/dt = a′x + b′y + c′,

consists in differentiating the first equation, thereby obtaining

d²x/dt² = a dx/dt + b dy/dt;

from the two given equations, by elimination of y, we can express dy/dt as a linear function of x and dx/dt; we can thus form an equation of the shape d²x/dt² = P + Qx + Rdx/dt, where P, Q, R are constants; this can be integrated by methods previously explained, and the integral, involving two arbitrary constants, gives, by the equation dx/dt = ax + by + c, the corresponding value of y. Conversely it should be noticed that any single linear differential equation

d²x/dt² = u + vx + w dx/dt,

where u, v, w are functions of t, by writing y for dx/dt, is equivalent with the two equations dx/dt = y, dy/dt = u + vx + wy. In fact a similar reduction is possible for any system of differential equations with one independent variable.

Equations occur to be integrated of the form

Xdx + Ydy + Zdz = 0,

where X, Y, Z are functions of x, y, z. We consider only the case in which there exists an equation φ(x, y, z) = C whose differential

(∂φ/∂x)dx + (∂φ/∂y)dy + (∂φ/∂z)dz = 0

is equivalent with the given differential equation; that is, μ being a proper function of x, y, z, we assume that there exist equations

∂φ/∂x = μX, ∂φ/∂y = μY, ∂φ/∂z = μZ;

these equations require

∂(μY)/∂z = ∂(μZ)/∂y, &c.,

and hence

X(∂Z/∂y − ∂Y/∂z) + Y(∂X/∂z − ∂Z/∂x) + Z(∂Y/∂x − ∂X/∂y) = 0;

conversely it can be proved that this is sufficient in order that μ may exist to render μ(Xdx + Ydy + Zdz) a perfect differential; in particular it may be satisfied in virtue of the three equations such as

∂Z/∂y − ∂Y/∂z = 0;

in which case we may take μ = 1. Assuming the condition in its general form, take in the given differential equation a plane section of the surface φ = C parallel to the plane of x, y, viz. put z constant, and consider the resulting differential equation in the two variables x, y, namely Xdx + Ydy = 0; let ψ(x, y, z) = constant, be its integral, the constant z entering, as a rule, in ψ because it enters in X and Y. Now differentiate the relation ψ(x, y, z) = ƒ(z), where ƒ is a function to be determined, so obtaining

(∂ψ/∂x)dx + (∂ψ/∂y)dy + (∂ψ/∂z − dƒ/dz)dz = 0;

there exists a function σ of x, y, z such that

∂ψ/∂x = σX, ∂ψ/∂y = σY,

because ψ = constant, is the integral of Xdx + Ydy = 0; we desire to prove that ƒ can be chosen so that also, in virtue of ψ(x, y, z) = ƒ(z), we have

∂ψ/∂z − dƒ/dz = σZ, namely dƒ/dz = ∂ψ/∂z − σZ;

if this can be proved the relation ψ(x, y, z) − ƒ(z) = constant, will be the integral of the given differential equation. To prove this it is enough to show that, in virtue of ψ(x, y, z) = ƒ(z), the function ∂ψ/∂z − σZ can be expressed in terms of z only. Now in consequence of the originally assumed relations,

∂φ/∂x = μX, ∂φ/∂y = μY, ∂φ/∂z = μZ,

we have

(∂ψ/∂x) / (∂φ/∂x) = σ/μ = (∂ψ/∂y) / (∂φ/∂y),

and hence

(∂ψ/∂x)(∂φ/∂y) − (∂ψ/∂y)(∂φ/∂x) = 0;

this shows that, as functions of x and y, ψ is a function of φ (see the note at the end of part i. of this article, on Jacobian determinants), so that we may write ψ = F(z, φ), from which

σ/μ = ∂F/∂φ; then ∂ψ/∂z = ∂F/∂z + (∂F/∂φ)(∂φ/∂z) = ∂F/∂z + (σ/μ)·μZ = ∂F/∂z + σZ, or ∂ψ/∂z − σZ = ∂F/∂z;

in virtue of ψ(x, y, z) = ƒ(z), and ψ = F(z, φ), the function φ can be written in terms of z only, thus ∂F/∂z can be written in terms of z only, and what we required to prove is proved.
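The integrability condition itself is easily tested by machine; a minimal sketch (assuming SymPy; the functions X, Y, Z below are our own choice):

```python
# The integrability condition for Xdx + Ydy + Zdz = 0, tested on
# X = y, Y = x, Z = 1, for which φ = xy + z = C is an integral (μ = 1):
import sympy as sp

x, y, z = sp.symbols('x y z')
X, Y, Z = y, x, sp.Integer(1)
cond = (X*(Z.diff(y) - Y.diff(z)) + Y*(X.diff(z) - Z.diff(x))
        + Z*(Y.diff(x) - X.diff(y)))
print(sp.simplify(cond))   # 0, as required for an integral to exist
```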

Consider lastly a simple type of differential equation containing two independent variables, say x and y, and one dependent variable z, namely the equation

P ∂z/∂x + Q ∂z/∂y = R,

where P, Q, R are functions of x, y, z. This is known as Lagrange’s linear partial differential equation of the first order. To integrate this, consider first the ordinary differential equations dx/dz = P/R, dy/dz = Q/R, and suppose that two functions u, v, of x, y, z can be determined, independent of one another, such that the equations u = a, v = b, where a, b are arbitrary constants, lead to these ordinary differential equations, namely such that

P ∂u/∂x + Q ∂u/∂y + R ∂u/∂z = 0 and P ∂v/∂x + Q ∂v/∂y + R ∂v/∂z = 0.

Then if F(x, y, z) = 0 be a relation satisfying the original differential equation, this relation giving rise to

∂F/∂x + (∂F/∂z)(∂z/∂x) = 0 and ∂F/∂y + (∂F/∂z)(∂z/∂y) = 0, we have P ∂F/∂x + Q ∂F/∂y + R ∂F/∂z = 0.

It follows that the determinant of three rows and columns vanishes whose first row consists of the three quantities ∂F/∂x, ∂F/∂y, ∂F/∂z, whose second row consists of the three quantities ∂u/∂x, ∂u/∂y, ∂u/∂z, whose third row consists similarly of the partial derivatives of v. The vanishing of this so-called Jacobian determinant is known to imply that F is expressible as a function of u and v, unless these are themselves functionally related, which is contrary to hypothesis (see the note below on Jacobian determinants). Conversely, any relation φ(u, v) = 0 can easily be proved, in virtue of the equations satisfied by u and v, to lead to

P ∂z/∂x + Q ∂z/∂y = R.

The solution of this partial equation is thus reduced to the solution of the two ordinary differential equations expressed by dx/P = dy/Q = dz/R. In regard to this problem one remark may be made which is often of use in practice: when one equation u = a has been found to satisfy the differential equations, we may utilize this to obtain the second equation v = b; for instance, we may, by means of u = a, eliminate z; when, from the resulting equations in x and y, a relation v = b has been found containing x and y and a, the substitution a = u will give a relation involving x, y, z.
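A minimal sketch of the method of subsidiary equations (assuming SymPy; the case P = Q = R = 1 and the arbitrary function sin are our own choices):

```python
# Lagrange's equation with P = Q = R = 1: dx = dy = dz gives u = x − y = a,
# v = x − z = b, and any relation φ(u, v) = 0, e.g. z = x − sin(x − y),
# satisfies P ∂z/∂x + Q ∂z/∂y = R.
import sympy as sp

x, y = sp.symbols('x y')
z = x - sp.sin(x - y)      # hypothetical choice of the arbitrary function
print(sp.simplify(z.diff(x) + z.diff(y) - 1))   # 0
```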

Note on Jacobian Determinants.—The fact assumed above that the vanishing of the Jacobian determinant whose elements are the partial derivatives of three functions F, u, v, of three variables x, y, z, involves that there exists a functional relation connecting the three functions F, u, v, may be proved somewhat roughly as follows:—

The corresponding theorem is true for any number of variables. Consider first the case of two functions p, q, of two variables x, y. The function p, not being constant, must contain one of the variables, say x; we can then suppose x expressed in terms of y and the function p; thus the function q can be expressed in terms of y and the function p, say q = Q(p, y). This is clear enough in the simplest cases which arise, when the functions are rational. Hence we have

∂q/∂x = (∂Q/∂p)(∂p/∂x) and ∂q/∂y = (∂Q/∂p)(∂p/∂y) + ∂Q/∂y;

these give

(∂p/∂x)(∂q/∂y) − (∂p/∂y)(∂q/∂x) = (∂p/∂x)(∂Q/∂y);

by hypothesis ∂p/∂x is not identically zero; therefore if the Jacobian determinant of p and q in regard to x and y is zero identically, so is ∂Q/∂y, or Q does not contain y, so that q is expressible as a function of p only. Conversely, such an expression can be seen at once to make the Jacobian of p and q vanish identically.

Passing now to the case of three variables, suppose that the Jacobian determinant of the three functions F, u, v in regard to x, y, z is identically zero. We prove that if u, v are not themselves functionally connected, F is expressible as a function of u and v. Suppose first that the minors of the elements of ∂F/∂x, ∂F/∂y, ∂F/∂z in the determinant are all identically zero, namely the three determinants such as

(∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y);

then by the case of two variables considered above there exist three functional relations, ψ1(u, v, x) = 0, ψ2(u, v, y) = 0, ψ3(u, v, z) = 0, of which the first, for example, follows from the vanishing of

(∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y).

We cannot assume that x is absent from ψ1, or y from ψ2, or z from ψ3; but conversely we cannot simultaneously have x entering in ψ1, and y in ψ2, and z in ψ3, or else by elimination of u and v from the three equations ψ1 = 0, ψ2 = 0, ψ3 = 0, we should find a necessary relation connecting the three independent quantities x, y, z; which is absurd. Thus when the three minors of ∂F/∂x, ∂F/∂y, ∂F/∂z in the Jacobian determinant are all zero, there exists a functional relation connecting u and v only. Suppose no such relation to exist; we can then suppose, for example, that

(∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)

is not zero. Then from the equations u(x, y, z) = u, v(x, y, z) = v we can express y and z in terms of u, v, and x (the attempt to do this could only fail by leading to a relation connecting u, v and x, and the existence of such a relation would involve that the determinant

(∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)

was zero), and so write F in the form F(x, y, z) = Φ(u, v, x). We then have

∂F/∂x = (∂Φ/∂u)(∂u/∂x) + (∂Φ/∂v)(∂v/∂x) + ∂Φ/∂x, ∂F/∂y = (∂Φ/∂u)(∂u/∂y) + (∂Φ/∂v)(∂v/∂y), ∂F/∂z = (∂Φ/∂u)(∂u/∂z) + (∂Φ/∂v)(∂v/∂z);

thereby the Jacobian determinant of F, u, v is reduced to

(∂Φ/∂x)[(∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)];

by hypothesis the second factor of this does not vanish identically; hence ∂Φ/∂x = 0 identically, and Φ does not contain x; so that F is expressible in terms of u, v only; as was to be proved.
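The criterion of the note is easily exhibited by machine; a minimal sketch (assuming SymPy; the functions u, v and the dependent F = u·v are our own choices):

```python
# The Jacobian criterion: F = u·v depends functionally on u = x + y + z and
# v = xyz, and the determinant of the partial derivatives of F, u, v in
# regard to x, y, z vanishes identically.
import sympy as sp

x, y, z = sp.symbols('x y z')
u, v = x + y + z, x*y*z
F = u*v
J = sp.Matrix([[sp.diff(f, s) for s in (x, y, z)] for f in (F, u, v)])
print(sp.simplify(J.det()))   # 0
```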

Part II.—General Theory.

Differential equations arise in the expression of the relations between quantities by the elimination of details, either unknown or regarded as unessential to the formulation of the relations in question. They give rise, therefore, to the two closely connected problems of determining what arrangement of details is consistent with them, and of developing, apart from these details, the general properties expressed by them. Very roughly, two methods of study can be distinguished, with the names Transformation-theories, Function-theories; the former is concerned with the reduction of the algebraical relations to the fewest and simplest forms, eventually with the hope of obtaining explicit expressions of the dependent variables in terms of the independent variables; the latter is concerned with the determination of the general descriptive relations among the quantities which are involved by the differential equations, with as little use of algebraical calculations as may be possible. Under the former heading we may, with the assumption of a few theorems belonging to the latter, arrange the theory of partial differential equations and Pfaff’s problem, with their geometrical interpretations, as at present developed, and the applications of Lie’s theory of transformation-groups to partial and to ordinary equations; under the latter, the study of linear differential equations in the manner initiated by Riemann, the applications of discontinuous groups, the theory of the singularities of integrals, and the study of potential equations with existence-theorems arising therefrom. In order to be clear we shall enter into some detail in regard to partial differential equations of the first order, both those which are linear in any number of variables and those not linear in two independent variables, and also in regard to the function-theory of linear differential equations of the second order. Space renders impossible anything further than the briefest account of many other matters; in particular, the theories of partial equations of higher than the first order, the function-theory of the singularities of ordinary equations not linear and the applications to differential geometry, are taken account of only in the bibliography. It is believed that on the whole the article will be more useful to the reader than if explanations of method had been further curtailed to include more facts.

When we speak of a function without qualification, it is to be understood that in the immediate neighbourhood of a particular set x0, y0, ... of values of the independent variables x, y, ... of the function, at whatever point of the range of values for x, y, ... under consideration x0, y0, ... may be chosen, the function can be expressed as a series of positive integral powers of the differences x − x0, y − y0, ..., convergent when these are sufficiently small (see Function: Functions of Complex Variables). Without this condition, which we express by saying that the function is developable about x0, y0, ..., many results provisionally stated in the transformation theories would be unmeaning or incorrect. If, then, we have a set of k functions, ƒ1 ... ƒk of n independent variables x1 ... xn, we say that they are independent when n ≥ k and not every determinant of k rows and columns vanishes of the matrix of k rows and n columns whose r-th row has the constituents dƒr/dx1, ... dƒr/dxn; the justification being in the theorem, which we assume, that if the determinant involving, for instance, the first k columns be not zero for x1 = xº1 ... xn = xºn, and the functions be developable about this point, then from the equations ƒ1 = c1, ... ƒk = ck we can express x1, ... xk by convergent power series in the differences xk+1 − xºk+1, ... xn − xºn, and so regard x1, ... xk as functions of the remaining variables. This we often express by saying that the equations ƒ1 = c1, ... ƒk = ck can be solved for x1, ... xk. The explanation is given as a type of explanation often understood in what follows.

Ordinary equations of the first order.—We may conveniently begin by stating the theorem: If each of the n functions φ1, ... φn of the (n + 1) variables x1, ... xn, t be developable about the values xº1, ... xºn, t0, the n differential equations of the form dx1/dt = φ1(t, x1, ... xn) are satisfied by convergent power series

xr = xºr + (t − t0) Ar1 + (t − t0)² Ar2 + ...

reducing respectively to xº1, ... xºn when t = t0; and the only functions satisfying the equations and reducing respectively to xº1, ... xºn when t = t0, are those determined by continuation of these series.

Single homogeneous partial equation of the first order.—If the result of solving these n equations for xº1, ... xºn be written in the form ω1(x1, ... xn, t) = xº1, ... ωn(x1, ... xn, t) = xºn, it is at once evident that the differential equation

dƒ/dt + φ1dƒ/dx1 + ... + φndƒ/dxn = 0

possesses n integrals, namely, the functions ω1, ... ωn, which are developable about the values (xº1, ... xºn, t0) and reduce respectively to x1, ... xn when t = t0. And in fact it has no other integrals so reducing. Thus this equation also possesses a unique integral reducing when t = t0 to an arbitrary function ψ(x1, ... xn), this integral being ψ(ω1, ... ωn). Conversely the existence of these principal integrals ω1, ... ωn of the partial equation establishes the existence of the specified solutions of the ordinary equations dxi/dt = φi.

Proof of the existence of integrals.—The following sketch of the proof of the existence of these principal integrals for the case n = 2 will show the character of more general investigations. Put x for x − x0, &c., and consider the equation a(xyt)dƒ/dx + b(xyt)dƒ/dy = dƒ/dt, wherein the functions a, b are developable about x = 0, y = 0, t = 0; say

a(xyt) = a0 + ta1 + t²a2/2! + ..., b(xyt) = b0 + tb1 + t²b2/2! + ...,

so that

ad/dx + bd/dy = δ0 + tδ1 + ½t²δ2 + ...,

where δr = ard/dx + brd/dy. In order that

ƒ = p0 + tp1 + t²p2/2! + ...

wherein p0, p1 ... are power series in x, y, should satisfy the equation, it is necessary, as we find by equating like terms, that

p1 = δ0p0, p2 = δ0p1 + δ1p0, &c.

and in general

ps+1 = δ0ps + sC1δ1ps−1 + sC2δ2ps−2 + ... + δsp0,

where sCr denotes the binomial coefficient s!/[r!(s − r)!].

Now compare with the given equation another equation

A(xyt)dF/dx + B(xyt)dF/dy = dF/dt,

wherein each coefficient in the expansion of either A or B is real and positive, and not less than the absolute value of the corresponding coefficient in the expansion of a or b. In the second equation let us substitute a series

F = P0 + tP1 + t²P2/2! + ...,

wherein the coefficients in P0 are real and positive, and each not less than the absolute value of the corresponding coefficient in p0; then putting Δr = Ard/dx + Brd/dy we obtain necessary equations of the same form as before, namely,

P1 = Δ0P0, P2 = Δ0P1 + Δ1P0, ...

and in general Ps+1 = Δ0Ps + sC1Δ1Ps−1 + ... + ΔsP0. These give for every coefficient in Ps+1 an integral aggregate with real positive coefficients of the coefficients in Ps, Ps−1, ..., P0 and the coefficients in A and B; and they are the same aggregates as would be given by the previously obtained equations for the corresponding coefficients in ps+1 in terms of the coefficients in ps, ps−1, ..., p0 and the coefficients in a and b. Hence as the coefficients in P0 and also in A, B are real and positive, it follows that the values obtained in succession for the coefficients in P1, P2, ... are real and positive; and further, taking account of the fact that the absolute value of a sum of terms is not greater than the sum of the absolute values of the terms, it follows, for each value of s, that every coefficient in ps+1 is, in absolute value, not greater than the corresponding coefficient in Ps+1. Thus if the series for F be convergent, the series for ƒ will also be; and we are thus reduced to (1), specifying functions A, B with real positive coefficients, each in absolute value not less than the corresponding coefficient in a, b; (2) proving that the equation

AdF/dx + BdF/dy = dF/dt

possesses an integral P0 + tP1 + t²P2/2! + ... in which the coefficients in P0 are real and positive, and each not less than the absolute value of the corresponding coefficient in p0. If a, b be developable for x, y both in absolute value less than r and for t less in absolute value than R, and for such values a, b be both less in absolute value than the real positive constant M, it is not difficult to verify that we may take A = B = M[1 − (x + y)/r]^(−1)(1 − t/R)^(−1), and obtain

F = r − (r − x − y)[1 − (4MR/r)(1 − (x + y)/r)^(−2) log(1 − t/R)^(−1)]^(1/2),

and that this solves the problem when x, y, t are sufficiently small for the two cases p0 = x, p0 = y. One obvious application of the general theorem is to the proof of the existence of an integral of an ordinary linear differential equation given by the n equations dy/dx = y1, dy1/dx = y2, ...,

dyn-1/dx = p − p1yn-1 − ... − pny;

but in fact any simultaneous system of ordinary equations is reducible to a system of the form

dxi/dt = φi(t, x1, ... xn).
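A sketch of the reduction just mentioned (assuming SymPy; the equation y″ = −y is our own choice):

```python
# The reduction mentioned in the text: y'' = −y, written as the pair
# dy/dt = y1, dy1/dt = −y, is in the standard form dxi/dt = φi, and the
# series continuation of the theorem gives the familiar cos/sin solutions.
import sympy as sp

t = sp.Symbol('t')
y, y1 = sp.Function('y'), sp.Function('y1')
print(sp.dsolve([sp.Eq(y(t).diff(t), y1(t)),
                 sp.Eq(y1(t).diff(t), -y(t))]))
```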

Simultaneous linear partial equations.—Suppose we have k homogeneous linear partial equations of the first order in n independent variables, the general equation being aσ1dƒ/dx1 + ... + aσndƒ/dxn = 0, where σ = 1, ... k, and that we desire to know whether the equations have common solutions, and if so, how many. It is to be understood that the equations are linearly independent, which implies that k ≤ n and not every determinant of k rows and columns is identically zero in the matrix in which the i-th element of the σ-th row is aσi (i = 1, ... n, σ = 1, ... k). Denoting the left side of the σ-th equation by Pσƒ, it is clear that every common solution of the two equations Pσƒ = 0, Pρƒ = 0, is also a solution of the equation Pρ(Pσƒ) − Pσ(Pρƒ) = 0. We immediately find, however, that this is also a linear equation, namely, ΣHidƒ/dxi = 0, where Hi = Pρaσi − Pσaρi, and if it be not already contained among the given equations, or be linearly deducible from them, it may be added to them, as not introducing any additional limitation of the possibility of their having common solutions. Proceeding thus with every pair of the original equations, and then with every pair of the possibly augmented system so obtained, and so on continually, we shall arrive at a system of equations, linearly independent of each other and therefore not more than n in number, such that the combination, in the way described, of every pair of them leads to an equation which is linearly deducible from them. If the number of this so-called complete system is n, the equations give dƒ/dx1 = 0, ... dƒ/dxn = 0, leading to the nugatory result ƒ = a constant.

Complete systems of linear partial equations.—Suppose, then, the number of this system to be r < n; suppose, further, that from the matrix of the coefficients a determinant of r rows and columns not vanishing identically is that formed by the coefficients of the differential coefficients of ƒ in regard to x1 ... xr; also that the coefficients are all developable about the values x1 = xº1, ... xn = xºn, and that for these values the determinant just spoken of is not zero. Then the main theorem is that the complete system of r equations, and therefore the originally given set of k equations, have in common n − r solutions, say ωr+1, ... ωn, which reduce respectively to xr+1, ... xn when in them for x1, ... xr are respectively put xº1, ... xºr; so that also the equations have in common a solution reducing when x1 = xº1, ... xr = xºr to an arbitrary function ψ(xr+1, ... xn) which is developable about xºr+1, ... xºn, namely, this common solution is ψ(ωr+1, ... ωn). It is seen at once that this result is a generalization of the theorem for r = 1, and its proof is conveniently given by induction from that case. It can be verified without difficulty (1) that if from the r equations of the complete system we form r independent linear aggregates, with coefficients not necessarily constants, the new system is also a complete system; (2) that if in place of the independent variables x1, ... xn we introduce any other variables which are independent functions of the former, the new equations also form a complete system. It is convenient, then, from the complete system of r equations to form r new equations by solving separately for dƒ/dx1, ..., dƒ/dxr; suppose the general equation of the new system to be

Qσƒ = dƒ/dxσ + cσ,r+1dƒ/dxr+1 + ... + cσ,ndƒ/dxn = 0 (σ = 1, ... r).

Then it is easily seen that the equation QρQσƒ − QσQρƒ = 0 contains only the differential coefficients of ƒ in regard to xr+1, ... xn; as it is at most a linear function of Q1ƒ, ... Qrƒ, it must be identically zero. So reduced, the system is called a Jacobian system.

Jacobian systems.—Of this system Q1ƒ = 0 has n − 1 principal solutions reducing respectively to x2, ... xn when x1 = xº1, and its form shows that of these the first r − 1 are exactly x2, ... xr. Let these n − 1 functions together with x1 be introduced as n new independent variables in all the r equations. Since the first equation is satisfied by n − 1 of the new independent variables, it will contain no differential coefficients in regard to them, and will reduce therefore simply to dƒ/dx1 = 0, expressing that any common solution of the r equations is a function only of the n − 1 remaining variables. Thereby the investigation of the common solutions is reduced to the same problem for r − 1 equations in n − 1 variables. Proceeding thus, we reach at length one equation in n − r + 1 variables, from which, by retracing the analysis, the proposition stated is seen to follow.

System of total differential equations.—The analogy with the case of one equation is, however, still closer. With the coefficients cσj of the equations Qσƒ = 0 in transposed array (σ = 1, ... r, j = r + 1, ... n) we can put down the (n − r) equations, dxj = c1jdx1 + ... + crjdxr, equivalent to the r(n − r) equations dxj/dxσ = cσj. That consistently with them we may be able to regard xr+1, ... xn as functions of x1, ... xr, these being regarded as independent variables, it is clearly necessary that when we differentiate cσj in regard to xρ on this hypothesis the result should be the same as when we differentiate cρj in regard to xσ on this hypothesis. The differential coefficient of a function ƒ of x1, ... xn on this hypothesis, in regard to xρ, is, however,

dƒ/dxρ + cρ,r+1dƒ/dxr+1 + ... + cρ,ndƒ/dxn,

namely, is Qρƒ. Thus the consistence of the n − r total equations requires the conditions Qρcσj − Qσcρj = 0, which are, however, verified in virtue of Qρ(Qσƒ) − Qσ(Qρƒ) = 0. And it can in fact be easily verified that if ωr+1, ... ωn be the principal solutions of the Jacobian system, Qσƒ = 0, reducing respectively to xr+1, ... xn when x1 = xº1, ... xr = xºr, and the equations ωr+1 = xºr+1, ... ωn = xºn be solved for xr+1, ... xn to give xj = ψj(x1, ... xr, xºr+1, ... xºn), these values solve the total equations and reduce respectively to xºr+1, ... xºn when x1 = xº1, ... xr = xºr. And the total equations have no other solutions with these initial values. Conversely, the existence of these solutions of the total equations can be deduced a priori and the theory of the Jacobian system based upon them. The theory of such total equations, in general, finds its natural place under the heading Pfaffian Expressions, below.

Geometrical interpretation and solution.—A practical method of reducing the solution of the r equations of a Jacobian system to that of a single equation in n − r + 1 variables may be explained in connexion with a geometrical interpretation which will perhaps be clearer in a particular case, say n = 3, r = 2. There is then only one total equation, say dz = adx + bdy; if we do not take account of the condition of integrability, which is in this case da/dy + bda/dz = db/dx + adb/dz, this equation may be regarded as defining through an arbitrary point (x0, y0, z0) of three-dimensioned space (about which a, b are developable) a plane, namely, z − z0 = a0(x − x0) + b0(y − y0), and therefore, through this arbitrary point ∞² directions, namely, all those in the plane. If now there be a surface z = ψ(x, y), satisfying dz = adx + bdy and passing through (x0, y0, z0), this plane will touch the surface, and the operations of passing along the surface from (x0, y0, z0) to

(x0 + dx0, y0, z0 + dz0)

and then to (x0 + dx0, y0 + dy0, z0 + d1z0), ought to lead to the same value of d1z0 as do the operations of passing along the surface from (x0, y0, z0) to (x0, y0 + dy0, z0 + δz0), and then to

(x0 + dx0, y0 + dy0, z0 + δ1z0),

namely, δ1z0 ought to be equal to d1z0. But we find

a0dx0 + b0dy0 + dx0dy0 (db/dx + a0 db/dz),

and so at once reach the condition of integrability. If now we put x = x0 + t, y = y0 + mt, and regard m as constant, we shall in fact be considering the section of the surface by a fixed plane y − y0 = m(x − x0); along this section dz = dt(a + bm); if we then integrate the equation dz/dt = a + bm, where a, b are expressed as functions of m and t, with m kept constant, finding the solution which reduces to z0 for t = 0, and in the result again replace m by (y − y0)/(x − x0), we shall have the surface in question.

Mayer’s method of integration.—In the general case the equations

dxj = c1jdx1 + ... + crjdxr

similarly determine through an arbitrary point xº1, ... xºn a planar manifold of r dimensions in space of n dimensions, and when the conditions of integrability are satisfied, every direction in this manifold through this point is tangent to the manifold of r dimensions, expressed by ωr+1 = xºr+1, ... ωn = xºn, which satisfies the equations and passes through this point. If we put x1 − xº1 = t, x2 − xº2 = m2t, ... xr − xºr = mrt, and regard m2, ... mr as fixed, the (n − r) total equations take the form dxj/dt = c1j + m2c2j + ... + mrcrj, and their integration is equivalent to that of the single partial equation

dƒ/dt + Σ (c1j + m2c2j + ... + mrcrj) dƒ/dxj = 0 (j = r + 1, ... n)

in the n − r + 1 variables t, xr+1, ... xn. Determining the solutions Ωr+1, ... Ωn which reduce respectively to xr+1, ... xn when t = 0, and substituting t = x1 − xº1, m2 = (x2 − xº2)/(x1 − xº1), ... mr = (xr − xºr)/(x1 − xº1), we obtain the solutions of the original system of partial equations previously denoted by ωr+1, ... ωn. It is to be remarked, however, that the presence of the fixed parameters m2, ... mr in the single integration may frequently render it more difficult than if they were assigned numerical quantities.

Pfaffian Expressions.—We have above considered the integration of an equation

dz = adx + bdy

on the hypothesis that the condition

da/dy + bda/dz = db/dx + adb/dz is satisfied.

It is natural to inquire what relations among x, y, z, if any, are implied by, or are consistent with, a differential relation adx + bdy + cdz = 0, when a, b, c are unrestricted functions of x, y, z. This problem leads to the consideration of the so-called Pfaffian Expression adx + bdy + cdz. It can be shown (1) that if each of the quantities db/dz − dc/dy, dc/dx − da/dz, da/dy − db/dx, which we shall denote respectively by u23, u31, u12, be identically zero, the expression is the differential of a function of x, y, z, equal to dt say; (2) that if the quantity au23 + bu31 + cu12 is identically zero, the expression is of the form udt, i.e. it can be made a perfect differential by multiplication by the factor 1/u; (3) that in general the expression is of the form dt + u1dt1. Consider the matrix of four rows and three columns, in which the elements of the first row are a, b, c, and the elements of the (r + 1)-th row, for r = 1, 2, 3, are the quantities ur1, ur2, ur3, where u11 = u22 = u33 = 0. Then it is easily seen that the cases (1), (2), (3) above correspond respectively to the cases when (1) every determinant of this matrix of two rows and columns is zero, (2) every determinant of three rows and columns is zero, (3) when no condition is assumed. This result can be generalized as follows: if a1, ... an be any functions of x1, ... xn, the so-called Pfaffian expression a1dx1 + ... + andxn can be reduced to one or other of the two forms

u1dt1 + ... + ukdtk, dt + u1dt1 + ... + uk-1dtk-1,

wherein t, u1 ..., t1, ... are independent functions of x1, ... xn, and k is such that in these two cases respectively 2k or 2k − 1 is the rank of a certain matrix of n + 1 rows and n columns, that is, the greatest number of rows and columns in a non-vanishing determinant of the matrix; the matrix is that whose first row is constituted by the quantities a1, ... an, whose s-th element in the (r + 1)-th row is the quantity dar/dxs − das/dxr. The proof of such a reduced form can be obtained from the two results: (1) If t be any given function of the 2m independent variables u1, ... um, t1, ... tm, the expression dt + u1dt1 + ... + umdtm can be put into the form u′1dt′1 + ... + u′mdt′m. (2) If the quantities u1, ... um, t1, ... tm be connected by a relation, the expression u1dt1 + ... + umdtm can be put into the form dt′ + u′1dt′1 + ... + u′m-1dt′m-1; and if the relation connecting u1, ... um, t1, ... tm be homogeneous in u1, ... um, then t′ can be taken to be zero. These two results are deductions from the theory of contact transformations (see below), and their demonstration requires, beside elementary algebraical considerations, only the theory of complete systems of linear homogeneous partial differential equations of the first order. When the existence of the reduced form of the Pfaffian expression containing only independent quantities is thus once assured, the identification of the number k with that defined by the specified matrix may, with some difficulty, be made a posteriori.

Single linear Pfaffian equation.—In all cases of a single Pfaffian equation we are thus led to consider what is implied by a relation dt − u1dt1 − ... − umdtm = 0, in which t, u1, ... um, t1, ... tm are, except for this equation, independent variables. This is to be satisfied in virtue of one or several relations connecting the variables; these must involve relations connecting t, t1, ... tm only, and in one of these at least t must actually enter. We can then suppose that in one actual system of relations in virtue of which the Pfaffian equation is satisfied, all the relations connecting t, t1 ... tm only are given by

t = ψ(ts+1 ... tm), t1 = ψ1(ts+1 ... tm), ... ts = ψs(ts+1 ... tm);

so that the equation

dψ − u1dψ1 − ... − usdψs − us+1dts+1 − ... − umdtm = 0

is identically true in regard to u1, ... um, ts+1 ..., tm; equating to zero the coefficients of the differentials of these variables, we thus obtain m − s relations of the form

dψ/dtj − u1dψ1/dtj − ... − usdψs/dtj − uj = 0;

these m − s relations, with the previous s + 1 relations, constitute a set of m + 1 relations connecting the 2m + 1 variables in virtue of which the Pfaffian equation is satisfied independently of the form of the functions ψ,ψ1, ... ψs. There is clearly such a set for each of the values s = 0, s = 1, ..., s = m − 1, s = m. And for any value of s there may exist relations additional to the specified m + 1 relations, provided they do not involve any relation connecting t, t1, ... tm only, and are consistent with the m − s relations connecting u1, ... um. It is now evident that, essentially, the integration of a Pfaffian equation

a1dx1 + ... + andxn = 0,

wherein a1, ... an are functions of x1, ... xn, is effected by the processes necessary to bring it to its reduced form, involving only independent variables. And it is easy to see that if we suppose this reduction to be carried out in all possible ways, there is no need to distinguish the classes of integrals corresponding to the various values of s; for it can be verified without difficulty that by putting t′ = t − u1t1 − ... − usts, t′1 = u1, ... t′s = us, u′1 = −t1, ..., u′s = −ts, t′s+1 = ts+1, ... t′m = tm, u′s+1 = us+1, ... u′m = um, the reduced equation becomes changed to dt′ − u′1dt′1 − ... − u′mdt′m = 0, and the general relations changed to

t′ = ψ(t′s+1, ... t′m) − t′1ψ1(t′s+1, ... t′m) − ... − t′sψs(t′s+1, ... t′m) = φ,

say, together with u′1 = dφ/dt′1, ..., u′m = dφ/dt′m, which contain only one relation connecting the variables t′, t′1, ... t′m only.

Simultaneous Pfaffian equations.—This method for a single Pfaffian equation can, strictly speaking, be generalized to a simultaneous system of (n − r) Pfaffian equations dxj = c1jdx1 + ... + crjdxr only in the case already treated, when this system is satisfied by regarding xr+1, ... xn as suitable functions of the independent variables x1, ... xr; in that case the integral manifolds are of r dimensions. When these are non-existent, there may be integral manifolds of higher dimensions; for if

dφ = φ1dx1 + ... + φrdxr + φr+1(c1,r+1dx1 + ... + cr,r+1dxr) + φr+2(c1,r+2dx1 + ... + cr,r+2dxr) + ...

be identically zero, then φσ + cσ,r+1φr+1 + ... + cσ,nφn = 0, or φ satisfies the r partial differential equations previously associated with the total equations; when these are not a complete system, but included in a complete system of r + μ equations, having therefore n − r − μ independent integrals, the total equations are satisfied over a manifold of r + μ dimensions (see E. v. Weber, Math. Annal. lv. (1901), p. 386).

Contact transformations.—It seems desirable to add here certain results, largely of algebraic character, which naturally arise in connexion with the theory of contact transformations. For any two functions of the 2n independent variables x1, ... xn, p1, ... pn we denote by (φψ) the sum of the n terms such as (dφ/dpi)(dψ/dxi) − (dψ/dpi)(dφ/dxi). For two functions of the (2n + 1) independent variables z, x1, ... xn, p1, ... pn we denote by [φψ] the sum of the n terms such as

( + pi ) ( pi ).
dpi dxi dz dpi dxi dz

It can at once be verified that for any three functions [ƒ[φψ]] + [φ[ψƒ]] + [ψ[ƒφ]] = dƒ/dz [φψ] + dφ/dz [ψƒ] + dψ/dz [ƒφ], which when ƒ, φ, ψ do not contain z becomes the identity (ƒ(φψ)) + (φ(ψƒ)) + (ψ(ƒφ)) = 0. Then, if X1, ... Xn, P1, ... Pn be such functions of x1, ... xn, p1, ... pn that P1dX1 + ... + PndXn is identically equal to p1dx1 + ... + pndxn, it can be shown by elementary algebra, after equating coefficients of independent differentials, (1) that the functions X1, ... Pn are independent functions of the 2n variables x1, ... pn, so that the equations x′i = Xi, p′i = Pi can be solved for x1, ... xn, p1, ... pn, and represent therefore a transformation, which we call a homogeneous contact transformation; (2) that the X1, ... Xn are homogeneous functions of p1, ... pn of zero dimensions, the P1, ... Pn are homogeneous functions of p1, ... pn of dimension one, and the ½n(n − 1) relations (XiXj) = 0 are verified. So also are the n² relations (PiXi) = 1, (PiXj) = 0, (PiPj) = 0. Conversely, if X1, ... Xn be independent functions, each homogeneous of zero dimension in p1, ... pn, satisfying the ½n(n − 1) relations (XiXj) = 0, then P1, ... Pn can be uniquely determined, by solving linear algebraic equations, such that P1dX1 + ... + PndXn = p1dx1 + ... + pndxn. If now we put n + 1 for n, put z for xn+1, Z for Xn+1, Qi for −Pi/Pn+1, for i = 1, ... n, put qi for −pi/pn+1 and σ for qn+1/Qn+1, and then finally write P1, ... Pn, p1, ... pn for Q1, ... Qn, q1, ... qn, we obtain the following results: If Z, X1, ... Xn, P1, ... Pn be functions of z, x1, ... xn, p1, ... pn, such that the expression dZ − P1dX1 − ... − PndXn is identically equal to σ(dz − p1dx1 − ... − pndxn), and σ not zero, then (1) the functions Z, X1, ... Xn, P1, ... Pn are independent functions of z, x1, ... xn, p1, ... pn, so that the equations z′ = Z, x′i = Xi, p′i = Pi can be solved for z, x1, ... xn, p1, ... pn and determine a transformation which we call a (non-homogeneous) contact transformation; (2) the Z, X1, ... Xn verify the ½n(n + 1) identities [ZXi] = 0, [XiXj] = 0. And the further identities

[PiXi] = σ, [PiXj] = 0, [PiZ] = σPi, [PiPj] = 0,

[Zσ] = σ dZ/dz − σ², [Xiσ] = σ dXi/dz, [Piσ] = σ dPi/dz

are also verified. Conversely, if Z, X1, ... Xn be independent functions satisfying the identities [ZXi] = 0, [XiXj] = 0, then σ, other than zero, and P1, ... Pn can be uniquely determined, by solution of algebraic equations, such that

dZ − P1dX1 − ... − PndXn = σ(dz − p1dx1 − ... − pndxn).

Finally, there is a particular case of great importance arising when σ = 1, which gives the results: (1) If U, X1, ... Xn, P1, ... Pn be 2n + 1 functions of the 2n independent variables x1, ... xn, p1, ... pn, satisfying the identity

dU + P1dX1 + ... + PndXn = p1dx1 + ... + pndxn,

then the 2n functions P1, ... Pn, X1, ... Xn are independent, and we have

(XiXj) = 0, (XiU) = δXi, (PiXi) = 1, (PiXj) = 0, (PiPj) = 0, (PiU) + Pi = δPi,

where δ denotes the operator p1d/dp1 + ... + pnd/dpn; (2) If X1, ... Xn be independent functions of x1, ... xn, p1, ... pn, such that (XiXj) = 0, then U can be found by a quadrature, such that

(XiU) = δXi;

and when X1, ... Xn, U satisfy these ½n(n + 1) conditions, then P1, ... Pn can be found, by solution of linear algebraic equations, to render true the identity dU + P1dX1 + ... + PndXn = p1dx1 + ... + pndxn; (3) Functions X1, ... Xn, P1, ... Pn can be found to satisfy this differential identity when U is an arbitrary given function of x1, ... xn, p1, ... pn; but this requires integrations. In order to see what integrations, it is only necessary to verify the statement that if U be an arbitrary given function of x1, ... xn, p1, ... pn, and, for r < n, X1, ... Xr be independent functions of these variables, such that (XσU) = δXσ, (XρXσ) = 0, for ρ, σ = 1 ... r, then the r + 1 homogeneous linear partial differential equations of the first order (Uƒ) + δƒ = 0, (Xρƒ) = 0, form a complete system. It will be seen that the assumptions above made for the reduction of Pfaffian expressions follow from the results here enunciated for contact transformations.
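The bracket (φψ) is elementary to compute by machine; a minimal sketch for n = 1 (assuming SymPy; the identity transformation X = x, P = p is our own choice):

```python
# The bracket (φψ) = Σ(dφ/dpi)(dψ/dxi) − (dψ/dpi)(dφ/dxi) for n = 1:
# the identity transformation X = x, P = p satisfies the stated relations
# (XX) = 0, (PX) = 1, (PP) = 0.
import sympy as sp

x, p = sp.symbols('x p')
def br(f, g):
    return sp.diff(f, p)*sp.diff(g, x) - sp.diff(g, p)*sp.diff(f, x)
print(br(x, x), br(p, x), br(p, p))   # 0 1 0
```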

Partial differential equation of the first order. We pass on now to consider the solution of any partial differential equation of the first order; we attempt to explain certain ideas relatively to a single equation with any number of independent variables (in particular, an ordinary equation of the first order with one independent variable) by speaking of a single equation with two independent variables x, y, and one dependent variable z. It will be seen that we are naturally led to consider systems of such simultaneous equations, which we consider below. The central discovery of the transformation theory of the solution of an equation F(x, y, z, dz/dx, dz/dy) = 0 is that its solution can always be reduced to the solution of partial equations which are linear. For this, however, we must regard dz/dx, dz/dy, during the process of integration, not as the differential coefficients of a function z in regard to x and y, but as variables independent of x, y, z, the too great indefiniteness that might thus appear to be introduced being provided for in another way. We notice that if z = ψ(x, y) be a solution of the differential equation, then dz = (dψ/dx)dx + (dψ/dy)dy; thus if we denote the equation by F(x, y, z, p, q) = 0, and prescribe the condition dz = pdx + qdy for every solution, any solution such as z = ψ(x, y) will necessarily be associated with the equations p = dz/dx, q = dz/dy, and z will satisfy the equation in its original form. We have previously seen (under Pfaffian Expressions) that if five variables x, y, z, p, q, otherwise independent, be subject to dz − pdx − qdy = 0, they must in fact be subject to at least three mutual relations. If we associate with a point (x, y, z) the plane

Z − z = p(X − x) + q(Y − y)

passing through it, where X, Y, Z are current co-ordinates, and call this association a surface-element; and if two consecutive elements, of which the point (x + dx, y + dy, z + dz) of one lies on the plane of the other, for which, that is, the condition dz = pdx + qdy is satisfied, be said to be connected, and an infinity of connected elements following one another continuously be called a connectivity, then our statement is that a connectivity consists of not more than ∞² elements, the whole number of elements (x, y, z, p, q) that are possible being called ∞⁵. The solution of an equation F(x, y, z, dz/dx, dz/dy) = 0 is then to be understood to mean finding in all possible ways, from the ∞⁴ elements (x, y, z, p, q) which satisfy F(x, y, z, p, q) = 0, a set of ∞² elements forming a connectivity; or, more analytically, finding in all possible ways two relations G = 0, H = 0 connecting x, y, z, p, q and independent of F = 0, so that the three relations together may involve

dz = pdx + qdy.

Such a set of three relations may, for example, be of the form z = ψ(x, y), p = dψ/dx, q = dψ/dy; but it may also, as another case, involve two relations z = ψ(y), x = ψ1(y) connecting x, y, z, the third relation being

ψ′(y) = pψ′1(y) + q,

the connectivity consisting in that case, geometrically, of a curve in space taken with ∞¹ of its tangent planes; or, finally, a connectivity is constituted by a fixed point and all the planes passing through that point. Meaning of a solution of the equation. This generalized view of the meaning of a solution of F = 0 is of advantage, moreover, in view of anomalies otherwise arising from special forms of the equation itself. For instance, we may include the case, sometimes arising when the equation to be solved is obtained by transformation from another equation, in which F does not contain either p or q. Then the equation has ∞² solutions, each consisting of an arbitrary point of the surface F = 0 and all the ∞² planes passing through this point; it also has ∞² solutions, each consisting of a curve drawn on the surface F = 0 and all the tangent planes of this curve, the whole consisting of ∞² elements; finally, it has also an isolated (or singular) solution consisting of the points of the surface, each associated with the tangent plane of the surface thereat, also ∞² elements in all. Or again, a linear equation F = Pp + Qq − R = 0, wherein P, Q, R are functions of x, y, z only, has ∞² solutions, each consisting of one of the curves defined by

dx/P = dy/Q = dz/R

taken with all the tangent planes of this curve; and the same equation has ∞² solutions, each consisting of the points of a surface containing ∞¹ of these curves and the tangent planes of this surface. And for the case of n variables there is similarly the possibility of n + 1 kinds of solution of an equation F(x1, ... xn, z, p1, ... pn) = 0; these can, however, by a simple contact transformation be reduced to one kind, in which there is only one relation z′ = ψ(x′1, ... x′n) connecting the new variables x′1, ... x′n, z′ (see under Pfaffian Expressions); just as in the case of the solution

z = ψ(y), x = ψ1(y), ψ′(y) = pψ′1(y) + q

of the equation Pp + Qq = R the transformation z′ = z − px, x′ = p, p′ = −x, y′ = y, q′ = q gives the solution

z′ = ψ(y′) + x′ψ1(y′), p′ = dz′/dx′, q′ = dz′/dy′

of the transformed equation. These explanations take no account of the possibility of p and q being infinite; this can be dealt with by writing p = −u/w, q = −v/w, and considering homogeneous equations in u, v, w, with udx + vdy + wdz = 0 as the differential relation necessary for a connectivity; in practice we use the ideas associated with such a procedure more often without the appropriate notation.
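The transformation z′ = z − px, x′ = p, p′ = −x, y′ = y, q′ = q employed above may be checked to be a contact transformation by direct computation. The sketch below, again in Python with sympy, compares the coefficients of dx, dy, dz, dp, dq on the two sides of dz′ − p′dx′ − q′dy′ = σ(dz − pdx − qdy), and finds σ = 1; the function name is merely illustrative.

import sympy as sp

x, y, z, p, q = sp.symbols('x y z p q')
Z, X, Y, P, Q = z - p * x, p, y, -x, q      # the transformation quoted above

def form_coeffs(zf, xf, yf, pf, qf):
    # coefficients of dx, dy, dz, dp, dq in d(zf) - pf d(xf) - qf d(yf)
    return [sp.expand(sp.diff(zf, v) - pf * sp.diff(xf, v) - qf * sp.diff(yf, v))
            for v in (x, y, z, p, q)]

# identical coefficient lists: dz' - p'dx' - q'dy' = dz - p dx - q dy, sigma = 1
assert form_coeffs(Z, X, Y, P, Q) == form_coeffs(z, x, y, p, q)
print('contact transformation verified, sigma = 1')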

Order of the ideas. In utilizing these general notions we shall first consider the theory of characteristic chains, initiated by Cauchy, which shows well the nature of the relations implied by the given differential equation; the alternative ways of carrying out the necessary integrations are suggested by considering the method of Jacobi and Mayer, while a good summary is obtained by the formulation in terms of a Pfaffian expression.

Characteristic chains. Consider a solution of F = 0 expressed by the three independent equations F = 0, G = 0, H = 0. If it be a solution in which there is more than one relation connecting x, y, z, let new variables x′, y′, z′, p′, q′ be introduced, as before explained under Pfaffian Expressions, in which z′ is of the form

z′ = z − p1x1 − ... − psxs (s = 1 or 2),

so that the solution becomes of a form z′ = ψ(x′, y′), p′ = dψ/dx′, q′ = dψ/dy′, which then will identically satisfy the transformed equations F′ = 0, G′ = 0, H′ = 0. The equation F′ = 0, if x′, y′, z′ be regarded as fixed, states that the plane Z − z′ = p′(X − x′) + q′(Y − y′) is tangent to a certain cone whose vertex is (x′, y′, z′), the consecutive point (x′ + dx′, y′ + dy′, z′ + dz′) of the generator of contact being such that

dx′ / (dF′/dp′) = dy′ / (dF′/dq′) = dz′ / (p′ dF′/dp′ + q′ dF′/dq′).

Passing in this direction on the surface z′ = ψ(x′, y′) the tangent plane of the surface at this consecutive point is (p′ + dp′, q′ + dq′), where, since F′(x′, y′, ψ, dψ/dx′, dψ/dy′) = 0 is identical, we have dx′ (dF′/dx′ + p′dF′/dz′) + dp′dF′/dp′ = 0. Thus the equations, which we shall call the characteristic equations,

dx′ / (dF′/dp′) = dy′ / (dF′/dq′) = dz′ / (p′ dF′/dp′ + q′ dF′/dq′) = − dp′ / (dF′/dx′ + p′ dF′/dz′) = − dq′ / (dF′/dy′ + q′ dF′/dz′)

are satisfied along a connectivity of ∞¹ elements consisting of a curve on z′ = ψ(x′, y′) and the tangent planes of the surface along this curve. The equation F′ = 0, when p′, q′ are fixed, represents a curve in the plane Z − z′ = p′(X − x′) + q′(Y − y′) passing through (x′, y′, z′); if (x′ + δx′, y′ + δy′, z′ + δz′) be a consecutive point of this curve, we find at once

δx′ (dF′/dx′ + p′ dF′/dz′) + δy′ (dF′/dy′ + q′ dF′/dz′) = 0;

thus the equations above give δx′dp′ + δy′dq′ = 0, or the tangent line of the plane curve is, on the surface z′ = ψ(x′, y′), in a direction conjugate to that of the generator of the cone. Putting each of the fractions in the characteristic equations equal to dt, the equations enable us, starting from an arbitrary element x′0, y′0, z′0, p′0, q′0, about which all the quantities F′, dF′/dp′, &c., occurring in the denominators, are developable, to define, from the differential equation F′ = 0 alone, a connectivity of ∞¹ elements, which we call a characteristic chain; and it is remarkable that when we transform again to the original variables (x, y, z, p, q), the form of the differential equations for the chain is unaltered, so that they can be written down at once from the equation F = 0. Thus we have proved that the characteristic chain starting from any ordinary element of any integral of this equation F = 0 consists only of elements belonging to this integral. For instance, if the equation do not contain p, q, the characteristic chain, starting from an arbitrary plane through an arbitrary point of the surface F = 0, consists of a pencil of planes whose axis is a tangent line of the surface F = 0. Or if F = 0 be of the form Pp + Qq = R, the chain consists of a curve satisfying dx/P = dy/Q = dz/R and a single infinity of tangent planes of this curve, determined by the tangent plane chosen at the initial point. In all cases there are ∞³ characteristic chains, whose aggregate may therefore be expected to exhaust the ∞⁴ elements satisfying F = 0.
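A characteristic chain can be exhibited numerically. The sketch below, in Python with scipy, integrates the characteristic equations for the equation F = pq − z = 0 (an equation chosen purely for illustration); for this F the equations written above become dx/dt = q, dy/dt = p, dz/dt = 2pq, dp/dt = p, dq/dt = q, and F should remain zero along the whole chain.

import numpy as np
from scipy.integrate import solve_ivp

def chain(t, e):
    # characteristic equations of F = pq - z
    x, y, z, p, q = e
    return [q, p, 2 * p * q, p, q]

e0 = [1.0, 1.0, 1.0, 1.0, 1.0]          # an element satisfying F = pq - z = 0
sol = solve_ivp(chain, (0.0, 2.0), e0, rtol=1e-10, atol=1e-12)

x, y, z, p, q = sol.y
print('max |F| along the chain:', np.abs(p * q - z).max())   # remains ~0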

Complete integral constructed with characteristic chains. Consider, in fact, a single infinity of connected elements each satisfying F = 0, say a chain connectivity T, consisting of elements specified by x0, y0, z0, p0, q0, which we suppose expressed as functions of a parameter u, so that

U0 = dz0/du − p0dx0/du − q0dy0/du

is everywhere zero on this chain; further, suppose that each of F, dF/dp, ... , dF/dx + pdF/dz is developable about each element of this chain T, and that T is not a characteristic chain. Then consider the aggregate of the characteristic chains issuing from all the elements of T. The ∞² elements, consisting of the aggregate of these characteristic chains, satisfy F = 0, provided the chain connectivity T consists of elements satisfying F = 0; for each characteristic chain satisfies dF = 0. It can be shown that these chains are connected; in other words, that if (x, y, z, p, q) be any element of one of these characteristic chains, not only is

dz/dt − pdx/dt − qdy/dt = 0,

as we know, but also U = dz/du − pdx/du − qdy/du is zero. For we have

dU/dt = d/dt ( dz/du − p dx/du − q dy/du ) − d/du ( dz/dt − p dx/dt − q dy/dt )
= (dp/du)(dx/dt) − (dp/dt)(dx/du) + (dq/du)(dy/dt) − (dq/dt)(dy/du),

which is equal to

(dp/du)(dF/dp) + (dx/du)(dF/dx + p dF/dz) + (dq/du)(dF/dq) + (dy/du)(dF/dy + q dF/dz) = − (dF/dz) U.

As dF/dz is a developable function of t, this, giving

U = U0 exp ( − ∫t0t (dF/dz) dt ),

shows that U is everywhere zero. Thus integrals of F = 0 are obtainable by considering the aggregate of characteristic chains issuing from arbitrary chain connectivities T satisfying F = 0; and such connectivities T are, it is seen at once, determinable without integration. Conversely, as such a chain connectivity T can be taken out from the elements of any given integral, all possible integrals are obtainable in this way. For instance, an arbitrary curve in space, given by x0 = θ(u), y0 = φ(u), z0 = ψ(u), determines by the two equations F(x0, y0, z0, p0, q0) = 0, ψ′(u) = p0θ′(u) + q0φ′(u), such a chain connectivity T, through which there passes a perfectly definite integral of the equation F = 0. By taking ∞² initial chain connectivities T, as for instance by taking the curves x0 = θ, y0 = φ, z0 = ψ to be the ∞² curves upon an arbitrary surface, we thus obtain ∞² integrals, and so ∞⁴ elements satisfying F = 0. In general, if functions G, H, independent of F, be obtained, such that the equations F = 0, G = b, H = c represent an integral for all values of the constants b, c, these equations are said to constitute a complete integral. Then ∞⁴ elements satisfying F = 0 are known, and in fact every other form of integral can be obtained without further integrations.
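The construction just described can likewise be followed numerically. Retaining the illustrative equation F = pq − z = 0 of the previous sketch, take for T the curve x0 = u, y0 = u, z0 = u²; the two conditions F = 0 and ψ′(u) = p0θ′(u) + q0φ′(u) then give p0 = q0 = u, and the characteristic chains issuing from T should sweep out the integral z = xy (which indeed satisfies pq = z).

import numpy as np
from scipy.integrate import solve_ivp

def chain(t, e):
    x, y, z, p, q = e
    return [q, p, 2 * p * q, p, q]

worst = 0.0
for u in np.linspace(0.5, 1.5, 11):      # sample elements of the chain T
    sol = solve_ivp(chain, (0.0, 1.0), [u, u, u * u, u, u],
                    rtol=1e-10, atol=1e-12)
    x, y, z = sol.y[0], sol.y[1], sol.y[2]
    worst = max(worst, np.abs(z - x * y).max())

print('max |z - xy| over the swept integral:', worst)   # ~0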

In the foregoing discussion of the differential equations of a characteristic chain, the denominators dF/dp, ... may be supposed to be modified in form by means of F = 0 in any way conducive to a simple integration. In the immediately following explanation of ideas, however, we consider indifferently all equations F = constant; when a function of x, y, z, p, q is said to be zero, it is meant that this is so identically, not in virtue of F = 0; in other words, we consider the integration of F = a, where a is an arbitrary constant.

Operations necessary for integration of F = a. In the theory of linear partial equations we have seen that the integration of the equations of the characteristic chains, from which, as has just been seen, that of the equation F = a follows at once, would be involved in completely integrating the single linear homogeneous partial differential equation of the first order [Fƒ] = 0, where the notation is that explained above under Contact Transformations. One obvious integral is ƒ = F. Putting F = a, where a is arbitrary, and eliminating one of the independent variables, we can reduce this equation [Fƒ] = 0 to one in four variables; and so on. Calling, then, the determination of a single integral of a single homogeneous partial differential equation of the first order in n independent variables an operation of order n − 1, the characteristic chains, and therefore the most general integral of F = a, can be obtained by successive operations of orders 3, 2, 1.

If, however, an integral of F = a be represented by F = a, G = b, H = c, where b and c are arbitrary constants, the expression of the fact that a characteristic chain of F = a satisfies dG = 0 gives [FG] = 0; similarly, [FH] = 0 and [GH] = 0, these three relations being identically true. Conversely, suppose that an integral G, independent of F, has been obtained of the equation [Fƒ] = 0, which is an operation of order three. Then it follows from the identity [ƒ[φψ]] + [φ[ψƒ]] + [ψ[ƒφ]] = dƒ/dz [φψ] + dφ/dz [ψƒ] + dψ/dz [ƒφ] before remarked, by putting φ = F, ψ = G, and then [Fƒ] = A(ƒ), [Gƒ] = B(ƒ), that AB(ƒ) − BA(ƒ) = dF/dz B(ƒ) − dG/dz A(ƒ), so that the two linear equations [Fƒ] = 0, [Gƒ] = 0 form a complete system; as two integrals F, G are known, they have a common integral H, independent of F, G, determinable by an operation of order one only. The three functions F, G, H thus identically satisfy the relations [FG] = [GH] = [FH] = 0. The ∞² elements satisfying F = a, G = b, H = c, wherein a, b, c are assigned constants, can then be seen to constitute an integral of F = a. For the conditions that a characteristic chain of G = b issuing from an element satisfying F = a, G = b, H = c should consist only of elements satisfying these three equations are simply [FG] = 0, [GH] = 0. Thus, starting from an arbitrary element of (F = a, G = b, H = c), we can single out a connectivity of elements of (F = a, G = b, H = c) forming a characteristic chain of G = b; then the aggregate of the characteristic chains of F = a issuing from the elements of this characteristic chain of G = b will be a connectivity consisting only of elements of

(F = a, G = b, H = c),

and will therefore constitute an integral of F = a; further, it will include all elements of (F = a, G = b, H = c). This result follows also from a theorem given under Contact Transformations, which shows, moreover, that though the characteristic chains of F = a are not determined by the three equations F = a, G = b, H = c, no further integration is now necessary to find them. By this theorem, since identically [FG] = [GH] = [FH] = 0, we can find, by the solution of linear algebraic equations only, a non-vanishing function σ and two functions A, C, such that

dG − AdF − CdH = σ(dz − pdx − qdy);

thus all the elements satisfying F = a, G = b, H = c satisfy dz = pdx + qdy and constitute a connectivity, which is therefore an integral of F = a. Further, from the associated theorems, F, G, H, A, C are independent functions and [FC] = 0. Thus C may be taken to be the remaining integral, independent of G, H, of the equation [Fƒ] = 0, whereby the characteristic chains are entirely determined.

When we consider the particular equation F = 0, neglecting the case when neither p nor q enters, and supposing p to enter, we may express p from F = 0 in terms of x, y, z, q, and then eliminate it from all other equations. Then instead of the equation [Fƒ] = 0, we have, if F = 0 give p = ψ(x, y, z, q), the equation

Σƒ = dƒ/dx + ψ dƒ/dz − dψ/dq ( dƒ/dy + q dƒ/dz ) + ( dψ/dy + q dψ/dz ) dƒ/dq = 0,

moreover obtainable by omitting the term in dƒ/dp in [p − ψ, ƒ] = 0. The single equation F = 0 and Pfaffian formulations. Let x0, y0, z0, q0 be values about which the coefficients in this equation are developable, and let ζ, η, ω be the principal solutions reducing respectively to z, y and q when x = x0. Then the equations p = ψ, ζ = z0, η = y0, ω = q0 represent a characteristic chain issuing from the element x0, y0, z0, ψ0, q0; we have seen that the aggregate of such chains issuing from the elements of an arbitrary chain satisfying

dz0 − p0dx0 − q0dy0 = 0

constitute an integral of the equation p = ψ. Let this arbitrary chain be taken so that x0 is constant; then the condition for initial values is only

dz0 − q0dy0 = 0,

and the elements of the integral constituted by the characteristic chains issuing therefrom satisfy

dζ − ωdη = 0.

Hence this equation involves dz − ψdx − qdy = 0, or we have

dz − ψdx − qdy = σ(dζ − ωdη),

where σ is not zero. Conversely, the integration of p = ψ is, essentially, the problem of writing the expression dz − ψdx − qdy in the form σ(dζ − ωdη), as must be possible (from what was said under Pfaffian Expressions).

System of equations of the first order. To integrate a system of simultaneous equations of the first order X1 = a1, ... Xr = ar in n independent variables x1, ... xn and one dependent variable z, we write p1 for dz/dx1, &c., and attempt to find n + 1 − r further functions Z, Xr+1, ... Xn, such that the equations Z = a, Xi = ai (i = 1, ... n) involve dz − p1dx1 − ... − pndxn = 0. By an argument already given, the common integral, if existent, must be satisfied by the equations of the characteristic chains of any one equation Xi = ai; thus each of the expressions [XiXj] must vanish in virtue of the equations expressing the integral, and we may without loss of generality assume that each of the corresponding ½r(r − 1) expressions formed from the r given differential equations vanishes in virtue of these equations. The determination of the remaining n + 1 − r functions may, as before, be made to depend on characteristic chains, which in this case, however, are manifolds of r dimensions obtained by integrating the equations [X1ƒ] = 0, ... [Xrƒ] = 0; or having obtained one integral of this system other than X1, ... Xr, say Xr+1, we may consider the system [X1ƒ] = 0, ... [Xr+1ƒ] = 0, for which, again, we have a choice; and at any stage we may use Mayer’s method and reduce the simultaneous linear equations to one equation involving parameters; while if at any stage of the process we find some but not all of the integrals of the simultaneous system, they can be used to simplify the remaining work; this can only be clearly explained in connexion with the theory of so-called function groups, for which we have no space. One result arising is that the simultaneous system p1 = φ1, ... pr = φr, wherein p1, ... pr are not involved in φ1, ... φr, if it satisfies the ½r(r − 1) relations [pi − φi, pj − φj] = 0, has a solution z = ψ(x1, ... xn), p1 = dψ/dx1, ... pn = dψ/dxn, reducing to an arbitrary function of xr+1, ... xn only, when x1 = xº1, ... xr = xºr, under certain conditions as to developability; a generalization of the theorem for linear equations. The problem of integration of this system is, as before, to put

dz − φ1dx1 − ... − φrdxr − pr+1dxr+1 − ... − pndxn

into the form σ(dζ − ωr+1dξr+1 − ... − ωndξn); and here ζ, ξr+1, ... ξn, ωr+1, ... ωn may be taken, as before, to be principal integrals of a certain complete system of linear equations, those, namely, determining the characteristic chains.
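For a pair of equations p1 = φ1, p2 = φ2 in which φ1, φ2 do not contain p1, p2, the single condition [p1 − φ1, p2 − φ2] = 0 reduces (with the bracket convention assumed in an earlier sketch) to dφ1/dx2 + φ2dφ1/dz = dφ2/dx1 + φ1dφ2/dz, the equality of the two mixed derivatives of z. The following sympy sketch verifies this for the illustrative pair φ1 = x2z, φ2 = x1z, whose common solution is z = C exp(x1x2).

import sympy as sp

x1, x2, z, C = sp.symbols('x1 x2 z C')
phi1, phi2 = x2 * z, x1 * z              # an illustrative compatible pair

cond = (sp.diff(phi1, x2) + phi2 * sp.diff(phi1, z)
        - sp.diff(phi2, x1) - phi1 * sp.diff(phi2, z))
assert sp.simplify(cond) == 0            # compatibility condition satisfied

w = C * sp.exp(x1 * x2)                  # candidate common solution
assert sp.simplify(sp.diff(w, x1) - phi1.subs(z, w)) == 0
assert sp.simplify(sp.diff(w, x2) - phi2.subs(z, w)) == 0
print('z = C exp(x1 x2) satisfies p1 = x2 z, p2 = x1 z')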

Equations of dynamics. If L be a function of t and of the 2n quantities x1, ... xn, ẋ1, ... ẋn, where ẋi denotes dxi/dt, &c., and if in the n equations

d/dt ( dL/dẋi ) = dL/dxi

we put pi = dL/dẋi, and so express ẋ1, ... ẋn in terms of t, x1, ... xn, p1, ... pn, assuming that the determinant of the quantities d²L/dẋidẋj is not zero; if, further, H denote the function of t, x1, ... xn, p1, ... pn numerically equal to p1ẋ1 + ... + pnẋn − L, it is easy to prove that dpi/dt = −dH/dxi, dxi/dt = dH/dpi. These so-called canonical equations form part of those for the characteristic chains of the single partial equation dz/dt + H(t, x1, ... xn, dz/dx1, ..., dz/dxn) = 0, to which then the solution of the original equations for x1, ... xn can be reduced. It may be shown (1) that if z = ψ(t, x1, ... xn, c1, ... cn) + c be a complete integral of this equation, then pi = dψ/dxi, dψ/dci = ei are 2n equations giving the solution of the canonical equations referred to, where c1, ... cn and e1, ... en are arbitrary constants; (2) that if xi = Xi(t, xº1, ... pºn), pi = Pi(t, xº1, ... pºn) be the principal solutions of the canonical equations for t = t0, and ω denote the result of substituting these values in p1dH/dp1 + ... + pndH/dpn − H, and Ω = ∫t0t ω dt, where, after integration, Ω is to be expressed as a function of t, x1, ... xn, xº1, ... xºn, then z = Ω + z0 is a complete integral of the partial equation.
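The canonical equations are easily illustrated numerically. For the Lagrangian L = ½ẋ² − ½x² (a single co-ordinate, chosen only for illustration) we have p = dL/dẋ = ẋ and H = pẋ − L = ½p² + ½x², so that the canonical equations give dx/dt = p, dp/dt = −x, with the exact motion x = cos t from x = 1, p = 0. The sketch below, in Python with scipy, confirms this.

import numpy as np
from scipy.integrate import solve_ivp

def canonical(t, state):
    x, p = state
    return [p, -x]                       # dx/dt = dH/dp, dp/dt = -dH/dx

t = np.linspace(0.0, 10.0, 101)
sol = solve_ivp(canonical, (0.0, 10.0), [1.0, 0.0], t_eval=t,
                rtol=1e-10, atol=1e-12)

print('max error against cos t:', np.abs(sol.y[0] - np.cos(t)).max())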

Application of theory of continuous groups to formal theories. A system of differential equations is said to allow a certain continuous group of transformations (see Groups, Theory of) when the introduction for the variables in the differential equations of the new variables given by the equations of the group leads, for all values of the parameters of the group, to the same differential equations in the new variables. It would be interesting to verify in examples that this is the case in at least the majority of the differential equations which are known to be integrable in finite terms. We give a theorem of very general application for the case of a simultaneous complete system of linear partial homogeneous differential equations of the first order, to the solution of which the various differential equations discussed have been reduced. It will be enough to consider whether the given differential equations allow the infinitesimal transformations of the group.

It can be shown easily that sufficient conditions in order that a complete system Π1ƒ = 0, ... Πkƒ = 0, in n independent variables, should allow the infinitesimal transformation Pƒ are expressed by k equations ΠiPƒ − PΠiƒ = λi1Π1ƒ + ... + λikΠkƒ. Suppose now a complete system of n − r equations in n variables to allow a group of r infinitesimal transformations (P1ƒ, ..., Prƒ) which has an invariant subgroup of r − 1 parameters (P1ƒ, ..., Pr−1ƒ), it being supposed that the n quantities Π1ƒ, ..., Πn−rƒ, P1ƒ, ..., Prƒ are not connected by an identical linear equation (with coefficients even depending on the independent variables). Then it can be shown that one solution of the complete system is determinable by a quadrature. For each of ΠiPσƒ − PσΠiƒ is a linear function of Π1ƒ, ..., Πn−rƒ, and the simultaneous system of independent equations Π1ƒ = 0, ... Πn−rƒ = 0, P1ƒ = 0, ... Pr−1ƒ = 0 is therefore a complete system, allowing the infinitesimal transformation Prƒ. This complete system of n − 1 equations has therefore one common solution ω, and Pr(ω) is a function of ω. By choosing ω suitably, we can then make Pr(ω) = 1. From this equation and the n − 1 equations Πiω = 0, Pσω = 0, we can determine ω by a quadrature only. Hence can be deduced a much more general result, that if the group of r parameters be integrable, the complete system can be entirely solved by quadratures; it is only necessary to introduce the solution found by the first quadrature as an independent variable, whereby we obtain a complete system of n − r equations in n − 1 variables, subject to an integrable group of r − 1 parameters, and to continue this process. We give some examples of the application of the theorem. (1) If an equation of the first order y′ = ψ(x, y) allow the infinitesimal transformation ξdƒ/dx + ηdƒ/dy, the integral curves ω(x, y) = y0, wherein ω(x, y) is the solution of dƒ/dx + ψ(x, y) dƒ/dy = 0 reducing to y for x = x0, are interchanged among themselves by the infinitesimal transformation, or ω(x, y) can be chosen to make ξdω/dx + ηdω/dy = 1; this, with dω/dx + ψdω/dy = 0, determines ω as the integral of the complete differential (dy − ψdx)/(η − ψξ). This result itself shows that every ordinary differential equation of the first order is subject to an infinite number of infinitesimal transformations. But every infinitesimal transformation ξdƒ/dx + ηdƒ/dy can by change of variables (after integration) be brought to the form dƒ/dy, and all differential equations of the first order allowing this group can then be reduced to the form F(x, dy/dx) = 0. (2) In an ordinary equation of the second order y″ = ψ(x, y, y′), equivalent to dy/dx = y1, dy1/dx = ψ(x, y, y1), if H, H1 be the solutions for y and y1 chosen to reduce to y0 and yº1 when x = x0, and the equations H = y, H1 = y1 be equivalent to ω = y0, ω1 = yº1, then ω, ω1 are the principal solutions of Πƒ = dƒ/dx + y1dƒ/dy + ψdƒ/dy1 = 0. If the original equation allow an infinitesimal transformation whose first extended form (see Groups) is Pƒ = ξdƒ/dx + ηdƒ/dy + η1dƒ/dy1, where η1δt is the increment of dy/dx when ξδt, ηδt are the increments of x, y, and is to be expressed in terms of x, y, y1, then each of Pω and Pω1 must be functions of ω and ω1, or the partial differential equation Πƒ = 0 must allow the group Pƒ.
Thus by our general theorem, if the differential equation allow a group of two parameters (and such a group is always integrable), it can be solved by quadratures, our explanation sufficing, however, only provided the form Πƒ and the two infinitesimal transformations are not linearly connected. It can be shown, from the fact that η1 is a quadratic polynomial in y1, that no differential equation of the second order can allow more than 8 really independent infinitesimal transformations, and that every homogeneous linear differential equation of the second order allows just 8, being in fact reducible to d²y/dx² = 0. Since every group of more than two parameters has subgroups of two parameters, a differential equation of the second order allowing a group of more than two parameters can, as a rule, be solved by quadratures. By transforming the group we see that if a differential equation of the second order allows a single infinitesimal transformation, it can be transformed to the form F(x, dy/dx, d²y/dx²) = 0; this is not the case for every differential equation of the second order. (3) For an ordinary differential equation of the third order, allowing an integrable group of three parameters whose infinitesimal transformations are not linearly connected with the partial equation to which the solution of the given ordinary equation is reducible, the similar result follows that it can be integrated by quadratures. But if the group of three parameters be simple, this result must be replaced by the statement that the integration is reducible to quadratures and that of a so-called Riccati equation of the first order, of the form dy/dx = A + By + Cy², where A, B, C are functions of x. (4) Similarly for the integration by quadratures of an ordinary equation yn = ψ(x, y, y1, ... yn−1) of any order. Moreover, the group allowed by the equation may quite well consist of extended contact transformations. An important application is to the case where the differential equation is the resolvent equation defining the group of transformations or rationality group of another differential equation (see below); in particular, when the rationality group of an ordinary linear differential equation is integrable, the equation can be solved by quadratures.
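Example (1) may be made concrete. The equation y′ = y/x + 1 allows the infinitesimal transformation xdƒ/dx + ydƒ/dy (this particular equation and symmetry being chosen here only for illustration), and the theorem then asserts that (dy − ψdx)/(η − ψξ) is an exact differential; its integral is ω = log x − y/x, giving the integral curves y = x log x + Cx. The sympy sketch below checks the invariance of the equation, the exactness, and the integral; the invariance test uses the standard first-prolongation condition of Lie's theory.

import sympy as sp

x, y = sp.symbols('x y', positive=True)
psi = y / x + 1                          # right-hand side of y' = psi(x, y)
xi, eta = x, y                           # assumed infinitesimal transformation

# invariance: xi psi_x + eta psi_y = eta_x + (eta_y - xi_x) psi - xi_y psi^2
lhs = xi * sp.diff(psi, x) + eta * sp.diff(psi, y)
rhs = (sp.diff(eta, x) + (sp.diff(eta, y) - sp.diff(xi, x)) * psi
       - sp.diff(xi, y) * psi**2)
assert sp.simplify(lhs - rhs) == 0

# exactness of (dy - psi dx)/(eta - psi xi) = M dx + N dy
M = -psi / (eta - psi * xi)
N = 1 / (eta - psi * xi)
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

omega = sp.log(x) - y / x                # its integral
assert sp.simplify(sp.diff(omega, x) - M) == 0
assert sp.simplify(sp.diff(omega, y) - N) == 0
print('integral curves: log x - y/x = const, i.e. y = x log x + Cx')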

Consideration of function theories of differential equations. Following the practical and provisional division of theories of differential equations, to which we alluded at starting, into transformation theories and function theories, we pass now to give some account of the latter. These are both a necessary logical complement of the former, and the only remaining resource when the expedients of the former have been exhausted. While in the former investigations we have dealt only with values of the independent variables about which the functions are developable, the leading idea now becomes, as was long ago remarked by G. Green, the consideration of the neighbourhood of the values of the variables for which this developable character ceases. Beginning, as before, with existence theorems applicable for ordinary values of the variables, we are to consider the cases of failure of such theorems.

When in a given set of differential equations the number of equations is greater than the number of dependent variables, the equations cannot be expected to have common solutions unless certain conditions of compatibility, obtainable by equating different forms of the same differential coefficients deducible from the equations, are satisfied. We have had examples in systems of linear equations, and in the case of a set of equations p1 = φ1, ..., pr = φr. For the case when the number of equations is the same as that of dependent variables, the following is a general theorem which should be referred to: A general existence theorem. Let there be r equations in r dependent variables z1, ... zr and n independent variables x1, ... xn; let the differential coefficient of zσ of highest order which enters be of order hσ, and suppose d^hσ zσ / dx1^hσ to enter, so that the equations can be written d^hσ zσ / dx1^hσ = Φσ, where in the general differential coefficient of zρ which enters in Φσ, say

d^(k1 + ... + kn) zρ / dx1^k1 ... dxn^kn,

we have k1 < hρ and k1 + ... + kn ≤ hρ. Let a1, ... an, b1, ... br, and bρk1 ... kn be a set of values of

x1, ... xn, z1, ... zr

and of the differential coefficients entering in Φσ about which all the functions Φ1, ... Φr are developable. Corresponding to each dependent variable zσ, we take now a set of hσ functions of x2, ... xn, say φσ, φσ(1), ..., φσ(hσ−1), arbitrary save that they must be developable about a2, a3, ... an, and such that for these values of x2, ... xn the function φρ reduces to bρ, and the differential coefficient

d^(k2 + ... + kn) φρ(k1) / dx2^k2 ... dxn^kn

reduces to bρk1 ... kn. Then the theorem is that there exists one, and only one, set of functions z1, ... zr, of x1, ... xn, developable about a1, ... an, satisfying the given differential equations, and such that for x1 = a1 we have

zσ = φσ, dzσ/dx1 = φσ(1), ..., d^(hσ−1)zσ / dx1^(hσ−1) = φσ(hσ−1).

And, moreover, if the arbitrary functions φσ, φσ(1) ... contain a certain number of arbitrary variables t1, ... tm, and be developable about the values tº1, ... tºm of these variables, the solutions z1, ... zr will contain t1, ... tm, and be developable about tº1, ... tºm.

The proof of this theorem may be given by showing that if ordinary power series in x1 − a1, ... xn − an, t1 − tº1, ... tm − tºm be substituted in the equations, wherein in zσ the coefficients of (x1 − a1)º, x1 − a1, ..., (x1 − a1)hσ−1 are the arbitrary functions φσ, φσ(1), ..., φσ(hσ−1), divided respectively by 1, 1!, 2!, &c., then the differential equations determine uniquely all the other coefficients, and that the resulting series are convergent. We rely, in fact, upon the theory of monogenic analytical functions (see Function), a function being determined entirely by its development in the neighbourhood of one set of values of the independent variables, from which all its other values arise by continuation; it being of course understood that the coefficients in the differential equations are to be continued at the same time. Singular points of solutions. But it is to be remarked that there is no ground for believing, if this method of continuation be utilized, that the function is single-valued; we may quite well return to the same values of the independent variables with a different value of the function, belonging, as we say, to a different branch of the function; and there is even no reason for assuming that the number of branches is finite, or that different branches have the same singular points and regions of existence. Moreover, and this is the most difficult consideration of all, all these circumstances may be dependent upon the values supposed given to the arbitrary constants of the integral; in other words, the singular points may be either fixed, being determined by the differential equations themselves, or they may be movable with the variation of the arbitrary constants of integration. Such difficulties arise even in establishing the reversion of an elliptic integral, in solving the equation

(dx/ds)² = (x − a1)(x − a2)(x − a3)(x − a4);

about an ordinary value the right side is developable; if we put x − a1 = t1², the right side becomes developable about t1 = 0; if we put x = 1/t, the right side of the changed equation is developable about t = 0; it is quite easy to show that the integral reducing to a definite value x0 for a value s0 is obtainable by a series in integral powers; this, however, must be supplemented by showing that for no value of s does the value of x become entirely undetermined.

Linear differential equations with rational coefficients. These remarks will show the place of the theory now to be sketched of a particular class of ordinary linear homogeneous differential equations whose importance arises from the completeness and generality with which they can be discussed. We have seen that if in the equations

dy/dx = y1, dy1/dx = y2, ..., dyn−2/dx = yn−1,
dyn−1/dx = any + an−1y1 + ... + a1yn−1,

where a1, a2, ..., an are now to be taken to be rational functions of x, the value x = xº be one for which no one of these rational functions is infinite, and yº, yº1, ..., yºn−1 be quite arbitrary finite values, then the equations are satisfied by

y = yºu + yº1u1 + ... + yºn−1un−1,

where u, u1, ..., un−1 are functions of x, independent of yº, ... yºn−1, developable about x = xº; this value of y is such that for x = xº the functions y, y1 ... yn−1 reduce respectively to yº, yº1, ... yºn−1; it can be proved that the region of existence of these series extends within a circle centre xº and radius equal to the distance from xº of the nearest point at which one of a1, ... an becomes infinite. Now consider a region enclosing xº and only one of the places, say Σ, at which one of a1, ... an becomes infinite. When x is made to describe a closed curve in this region, including this point Σ in its interior, it may well happen that the continuations of the functions u, u1, ..., un−1 give, when we have returned to the point x, values v, v1, ..., vn−1, so that the integral under consideration becomes changed to yºv + yº1v1 + ... + yºn−1vn−1. At xº let this branch and the corresponding values of y1, ... yn−1 be ηº, ηº1, ... ηºn−1; then, as there is only one series satisfying the equation and reducing to (ηº, ηº1, ... ηºn−1) for x = xº, and the coefficients in the differential equation are single-valued functions, we must have ηºu + ηº1u1 + ... + ηºn−1un−1 = yºv + yº1v1 + ... + yºn−1vn−1; as this holds for arbitrary values of yº ... yºn−1, upon which u, ... un−1 and v, ... vn−1 do not depend, it follows that each of v, ... vn−1 is a linear function of u, ... un−1 with constant coefficients, say vi = Ai1u + ... + Ainun−1. Then

yºv + ... + yºn−1vn−1 = (Σi Ai1yºi)u + ... + (Σi Ainyºi) un−1;

this is equal to μ(yºu + ... + yºn−1un−1) if Σi Airyºi = μyºr−1; eliminating yº, ... yºn−1 from these linear equations, we have a determinantal equation of order n for μ; let μ1 be one of its roots; determining the ratios of yº, yº1, ... yºn−1 to satisfy the linear equations, we have thus proved that there exists an integral, H, of the equation, which when continued round the point Σ and back to the starting-point becomes changed to μ1H. Let now ξ be the value of x at Σ, and r1 one of the values of (1/2πi) log μ1; consider the function (x − ξ)−r1H; when x makes a circuit round x = ξ, this becomes changed to

exp (−2πir1) (x − ξ)−r1 μ1H,

that is, is unchanged; thus we may put H = (x − ξ)r1φ1, φ1 being a function single-valued for paths in the region considered described about Σ, and therefore, by Laurent’s Theorem (see Function), capable of expression in the annular region about this point by a series of positive and negative integral powers of x − ξ, which in general may contain an infinite number of negative powers; there is, however, no reason to suppose r1 to be an integer, or even real. Thus, if all the roots of the determinantal equation in μ are different, we obtain n integrals of the forms (x − ξ)r1φ1, ..., (x − ξ)rnφn. In general we obtain as many integrals of this form as there are really different roots; and the problem arises to discover, in case a root be k times repeated, k − 1 equations of as simple a form as possible to replace the k − 1 equations of the form yºv + ... + yºn−1vn−1 = μ(yºu + ... + yºn−1un−1) which would have existed had the roots been different. The most natural method of obtaining a suggestion lies probably in remarking that if r2 = r1 + h, there is an integral [(x − ξ)r1+hφ2 − (x − ξ)r1φ1] / h, where the coefficients in φ2 are the same functions of r1 + h as are the coefficients in φ1 of r1; when h vanishes, this integral takes the form

(x − ξ)r1 [dφ1/dr1 + φ1 log (x − ξ)],

or say

(x − ξ)r1 [ψ1 + φ1 log (x − ξ)];

denoting this by 2πiμ1K, and (x − ξ)r1 φ1 by H, a circuit of the point ξ changes K into

K′ = (1/2πiμ1) [e2πir1 (x − ξ)r1 ψ1 + e2πir1 (x − ξ)r1 φ1 (2πi + log (x − ξ))] = μ1K + H.

A similar artifice suggests itself when three of the roots of the determinantal equation are the same, and so on. We are thus led to the result, which is justified by an examination of the algebraic conditions, that whatever may be the circumstances as to the roots of the determinantal equation, n integrals exist, breaking up into batches, the values of the constituents H1, H2, ... of a batch after circuit about x = ξ being H1′ = μ1H1, H2′ = μ1H2 + H1, H3′ = μ1H3 + H2, and so on. And this is found to lead to the forms (x − ξ)r1φ1, (x − ξ)r1[ψ1 + φ1 log (x − ξ)], (x − ξ)r1[χ1 + χ2 log (x − ξ) + φ1(log (x − ξ))²], and so on. Here each of φ1, ψ1, χ1, χ2, ... is a series of positive and negative integral powers of x − ξ in which the number of negative powers may be infinite.

It appears natural enough now to inquire whether, under proper conditions for the forms of the rational functions a1, ... an, it may be possible to ensure that in each of the series φ1, ψ1, χ1, ... the number of negative powers shall be finite. Regular equations. Herein lies, in fact, the limitation which experience has shown to be justified by the completeness of the results obtained. Assuming n integrals in which in each of φ1, ψ1, χ1 ... the number of negative powers is finite, there is a definite homogeneous linear differential equation having these integrals; this is found to have the form

y(n) = (x − ξ)−1 b1y(n−1) + (x − ξ)−2 b2y(n−2) + ... + (x − ξ)−n bny,

where b1, ... bn are finite for x = ξ. Conversely, assume the equation to have this form. Then on substituting a series of the form (x − ξ)r [1 + A1(x − ξ) + A2(x − ξ)² + ... ] and equating the coefficients of like powers of x − ξ, it is found that r must be a root of an algebraic equation of order n; this equation, which we shall call the index equation, can be obtained at once by substituting for y only (x − ξ)r and replacing each of b1, ... bn by their values at x = ξ; arrange the roots r1, r2, ... of this equation so that the real part of ri is equal to, or greater than, the real part of ri+1, and take r equal to r1; it is found that the coefficients A1, A2 ... are uniquely determinate, and that the series converges within a circle about x = ξ which includes no other of the points at which the rational functions a1 ... an become infinite. We have thus a solution H1 = (x − ξ)r1φ1 of the differential equation. If we now substitute in the equation y = H1∫ηdx, it is found to reduce to an equation of order n − 1 for η of the form

η(n−1) = (x − ξ)−1 c1η(n−2) + ... + (x − ξ)−(n−1) cn−1η,

where c1, ... cn−1 are not infinite at x = ξ. To this equation precisely similar reasoning can then be applied; its index equation has in fact the roots r2 − r1 − 1, ..., rn − r1 − 1; if r2 − r1 be zero, the integral (x − ξ)−1ψ1 of the η equation will give an integral of the original equation containing log (x − ξ); if r2 − r1 be an integer, and therefore a negative integer, the same will be true, unless in ψ1 the term in (x − ξ)r1−r2 be absent; if neither of these arise, the original equation will have an integral (x − ξ)r2φ2. The η equation can now, by means of the one integral of it belonging to the index r2 − r1 − 1, be similarly reduced to one of order n − 2, and so on. The result will be that stated above. We shall say that an equation of the form in question is regular about x = ξ.
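The index equation and the recurrence it governs can be computed mechanically. The sketch below, in Python with sympy, treats Bessel's equation x²y″ + xy′ + (x² − ν²)y = 0, which about x = 0 has the regular form y″ = x⁻¹b1y′ + x⁻²b2y with b1 = −1, b2 = ν² − x²; the index equation is r(r − 1) + r − ν² = 0, with roots ±ν, and the coefficients of the series belonging to the root ν follow by equating powers of x.

import sympy as sp

x = sp.symbols('x', positive=True)
nu = sp.symbols('nu', positive=True)
r = sp.symbols('r')
A = sp.symbols('A1:6')                   # coefficients A1 ... A5

b1, b2 = sp.Integer(-1), nu**2 - x**2
index_eq = sp.Eq(r * (r - 1), b1.subs(x, 0) * r + b2.subs(x, 0))
print(sp.solve(index_eq, r))             # roots -nu, nu

# substitute y = x^nu (1 + A1 x + ... + A5 x^5) and equate coefficients
y = x**nu * (1 + sum(a * x**(k + 1) for k, a in enumerate(A)))
res = sp.expand((x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x)
                 + (x**2 - nu**2) * y) / x**nu)
sol = sp.solve([res.coeff(x, k) for k in range(1, 6)], A, dict=True)[0]
print(sol[A[1]])                         # A2 = -1/(4(nu + 1)), as for J_nu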

We may examine in this way the behaviour of the integrals at all the points at which any one of the rational functions a1 ... an becomes infinite; in general we must expect that beside these the value x = ∞ will be a singular point for the solutions of the differential equation. To test this we put x = 1/t throughout, and examine as before at t = 0. For instance, the ordinary linear equation with constant coefficients has no singular point for finite values of x; at x = ∞ it has a singular point and is not regular; or again, Bessel’s equation x²y″ + xy′ + (x² − n²)y = 0 is regular about x = 0, but not about x = ∞. Fuchsian equations. An equation regular at all the finite singularities and also at x = ∞ is called a Fuchsian equation. We proceed to examine particularly the case of an equation of the second order

y″ + ay′ + by = 0.

Putting x = 1/t, it becomes

d²y/dt² + (2t−1 − at−2) dy/dt + bt−4 y = 0,

which is not regular about t = 0 unless 2 − at−1 and bt−2 be finite at t = 0, that is, unless ax and bx² are finite at x = ∞; which we thus assume; putting y = tr(1 + A1t + ... ), we find for the index equation at x = ∞ the equation r(r − 1) + r(2 − ax)0 + (bx²)0 = 0. Equation of the second order. If there be finite singular points at ξ1, ... ξm, where we assume m > 1, the cases m = 0, m = 1 being easily dealt with, and if φ(x) = (x − ξ1) ... (x − ξm), we must have a·φ(x) and b·[φ(x)]² finite for all finite values of x, equal say to the respective polynomials ψ(x) and θ(x), of which by the conditions at x = ∞ the highest respective orders possible are m − 1 and 2(m − 1). The index equation at x = ξ1 is r(r − 1) + rψ(ξ1) / φ′(ξ1) + θ(ξ1) / [φ′(ξ1)]² = 0, and if α1, β1 be its roots, we have α1 + β1 = 1 − ψ(ξ1) / φ′(ξ1) and α1β1 = θ(ξ1) / [φ′(ξ1)]². Thus by an elementary theorem of algebra, the sum Σ(1 − αi − βi) / (x − ξi), extended to the m finite singular points, is equal to ψ(x) / φ(x), and the sum Σ(1 − αi − βi) is equal to the ratio of the coefficients of the highest powers of x in ψ(x) and φ(x), and therefore equal to 1 + α + β, where α, β are the indices at x = ∞. Further, if (x, 1)m−2 denote the integral part of the quotient θ(x) / φ(x), we have Σ αiβiφ′(ξi) / (x − ξi) equal to −(x, 1)m−2 + θ(x)/φ(x), and the coefficient of xm−2 in (x, 1)m−2 is αβ. Thus the differential equation has the form

y″ + y′Σ (1 − αi − βi) / (x − ξi) + y[(x, 1)m−2 + Σ αiβiφ′(ξi) / (x − ξi)]/φ(x) = 0.

If, however, we make a change in the dependent variable, putting y = (x − ξ1)α1 ... (x − ξm)αm η, it is easy to see that the equation changes into one having the same singular points about each of which it is regular, and that the indices at x = ξi become 0 and βi − αi, which we shall denote by λi, for (x − ξj)αj can be developed in positive integral powers of x − ξi about x = ξi; by this transformation the indices at x = ∞ are changed to

α + α1 + ... + αm, β + β1 + ... + βm

which we shall denote by λ, μ. If we suppose this change to have been introduced, and still denote the dependent variable by y, the equation has the form

y″ + y′Σ (1 − λi) / (x − ξi) + y(x, 1)m−2 / φ(x) = 0,

while λ + μ + λ1 + ... + λm = m − 1. Conversely, it is easy to verify that if λμ be the coefficient of xm−2 in (x, 1)m−2, this equation has the specified singular points and indices whatever be the other coefficients in (x, 1)m−2.

Thus we see that (beside the cases m = 0, m = 1) the “Fuchsian equation” of the second order with two finite singular points is distinguished by the fact that it has a definite form when the singular points and the indices are assigned. Hypergeometric equation. In that case, putting (x − ξ1) / (x − ξ2) = t / (t − 1), the singular points are transformed to 0, 1, ∞, and, as is clear, without change of indices. Still denoting the independent variable by x, the equation then has the form

x(1 − x)y″ + y′[1 − λ1 − x(1 + λ + μ)] − λμy = 0,

which is the ordinary hypergeometric equation. Provided none of λ1, λ2, λ − μ be zero or integral, it has, about x = 0, the solutions

F(λ, μ, 1 − λ1, x), xλ1 F(λ + λ1, μ + λ1, 1 + λ1, x);

about x = 1 it has the solutions

F(λ, μ, 1 − λ2, 1 − x), (1 − x)λ2 F(λ + λ2, μ + λ2, 1 + λ2, 1 − x),

where λ + μ + λ1 + λ2 = 1; about x = ∞ it has the solutions

x−λ F(λ, λ + λ1, λ − μ + 1, x−1), x−μ F(μ, μ + λ1, μ − λ + 1, x−1),

where F(α, β, γ, x) is the series

1 + (αβ/γ) x + [α(α + 1)β(β + 1) / 1·2·γ(γ + 1)] x² + ...,

which converges when |x| < 1, whatever α, β, γ may be, converges for all values of x for which |x| = 1 provided the real part of γ − α − β > 0 algebraically, and converges for all these values except x = 1 provided the real part of γ − α − β > −1 algebraically.
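The series and its region of convergence are readily explored numerically; the sketch below, in Python, sums the series directly and compares it with the hypergeometric function hyp2f1 of scipy, for a few points inside the unit circle (the parameter values are arbitrary).

import numpy as np
from scipy.special import hyp2f1

def F(alpha, beta, gamma, x, terms=200):
    # partial sum of the hypergeometric series
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (alpha + k) * (beta + k) / ((1 + k) * (gamma + k)) * x
        total += term
    return total

for x in (0.1, -0.5, 0.9):
    print(x, F(0.5, 1.5, 2.5, x), hyp2f1(0.5, 1.5, 2.5, x))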

In accordance with our general theory, logarithms are to be expected in the solution when one of λ1, λ2, λ − μ is zero or integral. Indeed when λ1 is a negative integer, not zero, the second solution about x = 0 would contain vanishing factors in the denominators of its coefficients; in case λ or μ be one of the positive integers 1, 2, ... (−λ1), vanishing factors occur also in the numerators; and then, in fact, the second solution about x = 0 becomes xλ1 times an integral polynomial of degree (−λ1) − λ or of degree (−λ1) − μ. But when λ1 is a negative integer including zero, and neither λ nor μ is one of the positive integers 1, 2 ... (−λ1), the second solution about x = 0 involves a term having the factor log x. When λ1 is a positive integer, not zero, the second solution about x = 0 persists as a solution, in accordance with the order of arrangement of the roots of the index equation in our theory; the first solution is then replaced by an integral polynomial of degree −λ or −μ, when λ or μ is one of the negative integers 0, −1, −2, ..., 1 − λ1, but otherwise contains a logarithm. Similarly for the solutions about x = 1 or x = ∞; it will be seen below how the results are deducible from those for x = 0.

March of the Integral. Denote now the solutions about x = 0 by u1, u2; those about x = 1 by v1, v2; and those about x = ∞ by w1, w2; in the region (S0S1) common to the circles S0, S1 of radius 1 whose centres are the points x = 0, x = 1, all the first four are valid, and there exist equations u1 = Av1 + Bv2, u2 = Cv1 + Dv2, where A, B, C, D are constants; in the region (S1S) lying inside the circle S1 and outside the circle S0, those that are valid are v1, v2, w1, w2, and there exist equations v1 = Pw1 + Qw2, v2 = Rw1 + Tw2, where P, Q, R, T are constants; thus considering any integral whose expression within the circle S0 is au1 + bu2, where a, b are constants, the same integral will be represented within the circle S1 by (aA + bC)v1 + (aB + bD)v2, and outside these circles will be represented by

[(aA + bC)P + (aB + bD)R]w1 + [(aA + bC)Q + (aB + bD)T]w2.

A single-valued branch of such integral can be obtained by making a barrier in the plane joining ∞ to 0 and 1 to ∞; for instance, by excluding the consideration of real negative values of x and of real positive values greater than 1, and defining the phase of x and x − 1 for real values between 0 and 1 as respectively 0 and π.

We can form the Fuchsian equation of the second order with three arbitrary singular points ξ1, ξ2, ξ3, and no singular point at x = ∞, and with respective indices α1, β1, α2, β2, α3, β3 such that α1 + β1 + α2 + β2 + α3 + β3 = 1. Transformation of the equation into itself. This equation can then be transformed into the hypergeometric equation in 24 ways; for out of ξ1, ξ2, ξ3 we can in six ways choose two, say ξ1, ξ2, which are to be transformed respectively into 0 and 1, by (x − ξ1)/(x − ξ2) = t/(t − 1); and then there are four possible transformations of the dependent variable which will reduce one of the indices at t = 0 to zero and one of the indices at t = 1 also to zero, namely, we may reduce either α1 or β1 at t = 0, and simultaneously either α2 or β2 at t = 1. Thus the hypergeometric equation itself can be transformed into itself in 24 ways, and from the expression F(λ, μ, 1 − λ1, x) which satisfies it follow 23 other forms of solution; they involve four series in each of the arguments, x, x − 1, 1/x, 1/(1 − x), (x − 1)/x, x/(x − 1). Five of the 23 solutions agree with the fundamental solutions already described about x = 0, x = 1, x = ∞; and from the principles by which these were obtained it is immediately clear that the 24 forms are, in value, equal in fours.

Inversion. Modular functions. The quarter periods K, K′ of Jacobi’s theory of elliptic functions, of which K = ∫0π/2 (1 − h sin²θ)−½ dθ, and K′ is the same function of 1 − h, can easily be proved to be the solutions of a hypergeometric equation of which h is the independent variable. When K, K′ are regarded as defined in terms of h by the differential equation, the ratio K′/K is an infinitely many valued function of h. But it is remarkable that Jacobi’s own theory of theta functions leads to an expression of h in terms of K′/K (see Function) by means of single-valued functions. We may then attempt to investigate, in general, in what cases the independent variable x of a hypergeometric equation is a single-valued function of the ratio ς of two independent integrals of the equation. The same inquiry is suggested by the problem of ascertaining in what cases the hypergeometric series F(α, β, γ, x) is the expansion of an algebraic (irrational) function of x. In order to explain the meaning of the question, suppose that the plane of x is divided along the real axis from −∞ to 0 and from 1 to +∞, and, supposing logarithms not to enter about x = 0, choose two quite definite integrals y1, y2 of the equation, say

y1 = F(λ, μ, 1 − λ1, x), y2 = xλ1 F(λ + λ1, μ + λ1, 1 + λ1, x),

with the condition that the phase of x is zero when x is real and between 0 and 1. Then the value of ς = y2/y1 is definite for all values of x in the divided plane, ς being a single-valued monogenic branch of an analytical function existing and without singularities all over this region. If, now, the values of ς that so arise be plotted on to another plane, a value p + iq of ς being represented by a point (p, q) of this ς-plane, and the value of x from which it arose being mentally associated with this point of the ς-plane, these points will fill a connected region therein, with a continuous boundary formed of four portions corresponding to the two sides of the two barriers of the x-plane. The question is then, firstly, whether the same value of ς can arise for two different values of x, that is, whether the same point (p, q) of the ς-plane can arise twice, or in other words, whether the region of the ς-plane overlaps itself or not. Supposing this is not so, a second part of the question presents itself. If in the x-plane the barrier joining −∞ to 0 be momentarily removed, and x describe a small circle with centre at x = 0 starting from a point x = −h − ik, where h, k are small, real, and positive, and coming back to this point, the original value ς at this point will be changed to a value σ, which in the original case did not arise for this value of x, and possibly not at all. If, now, after restoring the barrier the values arising by continuation from σ be similarly plotted on the ς-plane, we shall again obtain a region which, while not overlapping itself, may quite possibly overlap the former region. In that case two values of x would arise for the same value or values of the quotient y2/y1, arising from two different branches of this quotient. We shall understand then, by the condition that x is to be a single-valued function of ς, that the region in the ς-plane corresponding to any branch is not to overlap itself, and that no two of the regions corresponding to the different branches are to overlap. Now in describing the circle about x = 0 from x = −h − ik to −h + ik, where h is small and k evanescent,

ς = xλ1 F(λ + λ1, μ + λ1, 1 + λ1, x) / F(λ, μ, 1 − λ1, x)

is changed to σ = ςe2πiλ1. Thus the two portions of boundary of the ς-region corresponding to the two sides of the barrier (−∞, 0) meet (at ς = 0 if the real part of λ1 be positive) at an angle 2πL1, where L1 is the absolute value of the real part of λ1; the same is true for the σ-region representing the branch σ. The condition that the ς-region shall not overlap itself requires, then, L1 ≤ 1. But, further, we may form an infinite number of branches σ = ςe2πiλ1, σ1 = σe2πiλ1, ... in the same way, and the corresponding regions in the plane upon which y2/y1 is represented will have a common point and each have an angle 2πL1; if neither overlaps the preceding, it will happen, if L1 is not zero, that at length one is reached overlapping the first, unless for some positive integer α we have 2παL1 = 2π, in other words L1 = 1/α. If this be so, the branch σα−1 = ςe2πiαλ1 will be represented by a region having the angle at the common point common with the region for the branch ς; but not altogether coinciding with this last region unless λ1 be real, and therefore = ±1/α; then there is only a finite number, α, of branches obtainable in this way by crossing the barrier (−∞, 0). In precisely the same way, if we had begun by taking the quotient

ς′ = (x − 1)λ2 F(λ + λ2, μ + λ2, 1 + λ2, 1 − x) / F(λ, μ, 1 − λ2, 1 − x)

of the two solutions about x = 1, we should have found that x is not a single-valued function of ς′ unless λ2 is the inverse of an integer, or is zero; as ς′ is of the form (Aς + B)/(Cς + D), A, B, C, D constants, the same is true in our case; equally, by considering the integrals about x = ∞ we find, as a third condition necessary in order that x may be a single-valued function of ς, that λ − μ must be the inverse of an integer or be zero. These three differences of the indices, namely, λ1, λ2, λ − μ, are the quantities which enter in the differential equation satisfied by x as a function of ς, which is easily found to be

3x11² / 2x1⁴ − x111 / x1³ = ½(h3 − h1 − h2) x−1(x − 1)−1 + ½h1x−2 + ½h2(x − 1)−2,

where x1 = dx/dς, &c.; and h1 = 1 − λ1², h2 = 1 − λ2², h3 = 1 − (λ − μ)². Into the converse question whether the three conditions are sufficient to ensure (1) that the ς-region corresponding to any branch does not overlap itself, (2) that no two such regions overlap, we have no space to enter. The second question clearly requires the inquiry whether the group (that is, the monodromy group) of the differential equation is properly discontinuous. (See Groups, Theory of.)
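The statement made above about the quarter periods admits a direct numerical check: K, as a function of h, should equal (π/2)F(½, ½, 1, h). The sketch below uses scipy, whose ellipk employs the same parameter h as the integral quoted in the text.

import numpy as np
from scipy.special import ellipk, hyp2f1

for h in (0.1, 0.5, 0.9):
    print(h, ellipk(h), np.pi / 2 * hyp2f1(0.5, 0.5, 1.0, h))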

The foregoing account will give an idea of the nature of the function theories of differential equations; it appears essential not to exclude some explanation of a theory intimately related both to such theories and to transformation theories, which is a generalization of Galois’s theory of algebraic equations. We deal only with the application to homogeneous linear differential equations.

Rationality group of a linear equation. In general a function of variables x1, x2 ... is said to be rational when it can be formed from them and the integers 1, 2, 3, ... by a finite number of additions, subtractions, multiplications and divisions. We generalize this definition. Assume that we have assigned a fundamental series of quantities and functions of x, in which x itself is included, such that all quantities formed by a finite number of additions, subtractions, multiplications, divisions and differentiations in regard to x, of the terms of this series, are themselves members of this series. Then the quantities of this series, and only these, are called rational. By a rational function of quantities p, q, r, ... is meant a function formed from them and any of the fundamental rational quantities by a finite number of the five fundamental operations. Thus it is a function which would be called, simply, rational if the fundamental series were widened by the addition to it of the quantities p, q, r, ... and those derivable from them by the five fundamental operations. A rational ordinary differential equation, with x as independent and y as dependent variable, is then one which equates to zero a rational function of y, the order k of the differential equation being that of the highest differential coefficient y(k) which enters; only such equations are here discussed. Irreducibility of a rational equation. Such an equation P = 0 is called irreducible when, firstly, being arranged as an integral polynomial in y(k), this polynomial is not the product of other polynomials in y(k) also of rational form; and, secondly, the equation has no solution satisfying also a rational equation of lower order. From this it follows that if an irreducible equation P = 0 have one solution satisfying another rational equation Q = 0 of the same or higher order, then all the solutions of P = 0 also satisfy Q = 0. For from the equation P = 0 we can by differentiation express y(k+1), y(k+2), ... in terms of x, y, y(1), ... , y(k), and so put the function Q rationally in terms of these quantities only. It is sufficient, then, to prove the result when the equation Q = 0 is of the same order as P = 0. Let both the equations be arranged as integral polynomials in y(k); their algebraic eliminant in regard to y(k) must then vanish identically, for they are known to have one common solution not satisfying an equation of lower order; thus the equation P = 0 involves Q = 0 for all solutions of P = 0.

The variant function for a linear equation.

Now let y(n) = a1y(n−1) + ... + any be a given rational homogeneous linear differential equation; let y1, ... yn be n particular functions of x, unconnected by any equation with constant coefficients of the form c1y1 + ... + cnyn = 0, all satisfying the differential equation; let η1, ... ηn be linear functions of y1, ... yn, say ηi = Ai1y1 + ... + Ainyn, where the constant coefficients Aij have a non-vanishing determinant; write (η) = A(y), these being the equations of a general linear homogeneous group whose transformations may be denoted by A, B, .... We desire to form a rational function φ(η), or say φ(A(y)), of η1, ... ηn, in which the n² constants Aij shall all be essential, and not reduce effectively to a fewer number, as they would, for instance, if the y1, ... yn were connected by a linear equation with constant coefficients. Such a function is in fact given, if the solutions y1, ... yn be developable in positive integral powers about x = a, by φ(η) = η1 + (x − a)ⁿη2 + ... + (x − a)⁽ⁿ⁻¹⁾ⁿηn. Such a function, V, we call a variant.
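
For n = 2 and a = 0 the variant reads V = η1 + x²η2, and the four constants Aij are all essential because y1, y2, x²y1, x²y2 are connected by no linear relation with constant coefficients. A small sketch (assuming the Python library sympy; the solutions cos x, sin x of y″ + y = 0 are an illustrative choice):

    import sympy as sp

    x = sp.symbols('x')
    A11, A12, A21, A22 = sp.symbols('A11 A12 A21 A22')

    y1, y2 = sp.cos(x), sp.sin(x)    # independent solutions of y'' + y = 0
    eta1 = A11*y1 + A12*y2
    eta2 = A21*y1 + A22*y2

    V = eta1 + x**2*eta2             # the variant for n = 2 about a = 0

    # The four constants are essential: y1, y2, x**2*y1, x**2*y2 are linearly
    # independent, as their Wronskian is not identically zero.
    funcs = [y1, y2, x**2*y1, x**2*y2]
    W = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(4)])
    print(sp.simplify(W.det()))      # a nonzero expression (equal to 12 at x = 0)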

The resolvent equation.

Then differentiating V in regard to x, and replacing ηi(n) by its value a1ηi(n−1) + ... + anηi, we can arrange dV/dx, and similarly each of d²V/dx², ... dNV/dxN, where N = n², as a linear function of the N quantities η1, ... ηn, ... η1(n−1), ... ηn(n−1), and thence by elimination obtain a linear differential equation for V of order N with rational coefficients. This we denote by F = 0. Further, each of η1, ... ηn is expressible as a linear function of V, dV/dx, ... dN−1V/dxN−1, with rational coefficients not involving any of the n² coefficients Aij, since otherwise V would satisfy a linear equation of order less than N, which is impossible, as it involves (linearly) the n² arbitrary coefficients Aij, which would not enter into the coefficients of the supposed equation. In particular, y1, ... yn are expressible rationally as linear functions of ω, dω/dx, ... dN−1ω/dxN−1, where ω is the particular function φ(y). Any solution W of the equation F = 0 is derivable from functions ζ1, ... ζn, which are linear functions of y1, ... yn, just as V was derived from η1, ... ηn; but it does not follow that these functions ζ1, ... ζn are obtained from y1, ... yn by a transformation of the linear group A, B, ...; for it may happen that the determinant d(ζ1, ... ζn)/d(y1, ... yn) is zero. In that case ζ1, ... ζn may be called a singular set, and W a singular solution; it satisfies an equation of order lower than N. But every solution V, W, ordinary or singular, of the equation F = 0, is expressible rationally in terms of ω, dω/dx, ... dN−1ω/dxN−1; we shall write, simply, V = r(ω). Consider now the rational irreducible equation of lowest order, not necessarily a linear equation, which is satisfied by ω; as y1, ... yn are particular functions, it may quite well be of order less than N; we call it the resolvent equation, suppose it of order p, and denote it by γ(v) = 0. Upon it the whole theory turns. In the first place, as γ(v) = 0 is satisfied by the solution ω of F = 0, all the solutions of γ(v) = 0 are solutions of F = 0, and are therefore rationally expressible by ω; any one may then be denoted by r(ω). If this solution of F = 0 be not singular, it corresponds to a transformation A of the linear group (A, B, ...), effected upon y1, ... yn. The coefficients Aij of this transformation follow from the expressions before mentioned for η1, ... ηn in terms of V, dV/dx, d²V/dx², ... by substituting V = r(ω); thus they depend on the p arbitrary parameters which enter into the general expression for the integral of the equation γ(v) = 0. Without going into further details, it is then clear enough that the resolvent equation, being irreducible and such that any solution is expressible rationally, with p parameters, in terms of the solution ω, enables us to define a linear homogeneous group of transformations of y1, ... yn depending on p parameters; and every operation of this (continuous) group corresponds to a rational transformation of the solution of the resolvent equation. This is the group called the rationality group, or the group of transformations of the original homogeneous linear differential equation.
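
The elimination producing F = 0 is quite mechanical: each derivative of V is a linear function, with rational coefficients, of the N = n² quantities ηi, ηi(1), ..., and any N + 1 of these derivatives must therefore be linearly related. A sketch for the model case n = 2 (assuming the Python library sympy, and re-using the illustrative equation η″ = −η and the variant V = η1 + x²η2 from above):

    import sympy as sp

    x = sp.symbols('x')
    u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2')   # stand for eta1, eta2, eta1', eta2'
    basis = [u1, u2, v1, v2]

    def D(expr):
        # total derivative: eta_i' -> v_i and, from eta'' = -eta, v_i' -> -u_i
        return sp.expand(sp.diff(expr, x)
                         + sp.diff(expr, u1)*v1 + sp.diff(expr, u2)*v2
                         - sp.diff(expr, v1)*u1 - sp.diff(expr, v2)*u2)

    V = u1 + x**2*u2                   # the variant for n = 2, a = 0
    rows, cur = [], V
    for _ in range(5):                 # coordinates of V, V', ..., V'''' in the basis
        rows.append([cur.coeff(b) for b in basis])
        cur = D(cur)

    M = sp.Matrix(rows)                # five rows lying in a four-dimensional space
    c = M.T.nullspace()[0]             # a left null vector of M
    print(sp.simplify(c.T))            # coefficients of c0*V + c1*V' + ... + c4*V'''' = 0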

The group must not be confounded with a subgroup of itself, the monodromy group of the equation, often called simply the group of the equation, which is a set of transformations, not depending on arbitrary variable parameters, arising for one particular fundamental set of solutions of the linear equation (see Groups, Theory of).

The fundamental theorem in regard to the rationality group.

The importance of the rationality group consists in three propositions. (1) Any rational function of y1, ... yn which is unaltered in value by the transformations of the group can be written in rational form. (2) If any rational function be changed in form, becoming a rational function of y1, ... yn, a transformation of the group applied to its new form will leave its value unaltered. (3) Any homogeneous linear transformation leaving unaltered the value of every rational function of y1, ... yn which has a rational value, belongs to the group. It follows from these that any group of linear homogeneous transformations having the properties (1) and (2) is identical with the group in question. It is clear that with these properties the group must be of the greatest importance in attempting to discover what functions of x must be regarded as rational in order that the values of y1, ... yn may be expressed. And this is the problem of solving the equation from another point of view.
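
As a concrete modern illustration (not in the original): for dy/dx = y, whose solutions are the constant multiples of eˣ, the quotient y′/y = 1 is rational and is unaltered by every scaling y → cy; conversely, the scalings are exactly the transformations preserving all rational-valued rational functions of a solution, so the rationality group is the full group of scalings, reflecting the transcendence of eˣ over the rational functions of x. A sketch of the invariance (assuming the Python library sympy):

    import sympy as sp

    x, c = sp.symbols('x c')
    y = sp.exp(x)    # a particular solution of dy/dx = y

    # y'/y has the rational value 1, and every scaling y -> c*y leaves it unaltered:
    print(sp.simplify(sp.diff(y, x)/y))          # prints 1
    print(sp.simplify(sp.diff(c*y, x)/(c*y)))    # prints 1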

Literature.—(α) Formal or Transformation Theories for Equations of the First Order:—E. Goursat, Leçons sur l’intégration des équations aux dérivées partielles du premier ordre (Paris, 1891); E. v. Weber, Vorlesungen über das Pfaff’sche Problem und die Theorie der partiellen Differentialgleichungen erster Ordnung (Leipzig, 1900); S. Lie und G. Scheffers, Geometrie der Berührungstransformationen, Bd. i. (Leipzig, 1896); Forsyth, Theory of Differential Equations, Part i., Exact Equations and Pfaff’s Problem (Cambridge, 1890); S. Lie, “Allgemeine Untersuchungen über Differentialgleichungen, die eine continuirliche endliche Gruppe gestatten” (Memoir), Mathem. Annal. xxv. (1885), pp. 71-151; S. Lie und G. Scheffers, Vorlesungen über Differentialgleichungen mit bekannten infinitesimalen Transformationen (Leipzig, 1891). A very full bibliography is given in the book of E. v. Weber referred to; those here named are perhaps sufficiently representative of modern works. Of classical works may be named: Jacobi, Vorlesungen über Dynamik (von A. Clebsch, Berlin, 1866); Werke, Supplementband; G. Monge, Application de l’analyse à la géométrie (par M. Liouville, Paris, 1850); J. L. Lagrange, Leçons sur le calcul des fonctions (Paris, 1806), and Théorie des fonctions analytiques (Paris, Prairial, an V); G. Boole, A Treatise on Differential Equations (London, 1859); and Supplementary Volume (London, 1865); Darboux, Leçons sur la théorie générale des surfaces, tt. i.-iv. (Paris, 1887-1896); S. Lie, Theorie der Transformationsgruppen, ii. (on Contact Transformations) (Leipzig, 1890).

(β) Quantitative or Function Theories for Linear Equations:—C. Jordan, Cours d’analyse, t. iii. (Paris, 1896); E. Picard, Traité d’analyse, tt. ii. and iii. (Paris, 1893, 1896); Fuchs, Various Memoirs, beginning with that in Crelle’s Journal, Bd. lxvi. p. 121; Riemann, Werke, 2te Aufl. (1892); Schlesinger, Handbuch der Theorie der linearen Differentialgleichungen, Bde. i.-ii. (Leipzig, 1895-1898); Heffter, Einleitung in die Theorie der linearen Differentialgleichungen mit einer unabhängigen Variablen (Leipzig, 1894); Klein, Vorlesungen über lineare Differentialgleichungen der zweiten Ordnung (Autographed, Göttingen, 1894); and Vorlesungen über die hypergeometrische Function (Autographed, Göttingen, 1894); Forsyth, Theory of Differential Equations, Linear Equations.

(γ) Rationality Group (of Linear Differential Equations):—Picard, Traité d’Analyse, as above, t. iii.; Vessiot, Annales de l’École Normale, série III. t. ix. p. 199 (Memoir); S. Lie, Transformationsgruppen, as above, iii. A connected account is given in Schlesinger, as above, Bd. ii., erster Theil.

(δ) Function Theories of Non-Linear Ordinary Equations:—Painlevé, Leçons sur la théorie analytique des équations différentielles (Paris, 1897, Autographed); Forsyth, Theory of Differential Equations, Part ii., Ordinary Equations not Linear (two volumes, ii. and iii.) (Cambridge, 1900); Königsberger, Lehrbuch der Theorie der Differentialgleichungen (Leipzig, 1889); Painlevé, Leçons sur l’intégration des équations différentielles de la mécanique et applications (Paris, 1895).

(ε) Formal Theories of Partial Equations of the Second and Higher Orders:—E. Goursat, Leçons sur l’intégration des équations aux dérivées partielles du second ordre, tt. i. and ii. (Paris, 1896, 1898); Forsyth, Treatise on Differential Equations (London, 1889); and Phil. Trans. Roy. Soc. (A.), vol. cxci. (1898), pp. 1-86.

(ζ) See also the six extensive articles in the second volume of the German Encyclopaedia of Mathematics.

(H. F. Ba.)


