Method of lines



Method of Lines, Part I: Basic Concepts

The method of lines (MOL) is a general procedure for the solution of time dependent partial differential equations (PDEs). First we discuss the basic concepts, then in Part II, we follow on with an example implementation.


Some PDE Basics

Our physical world is most generally described in scientific and engineering terms with respect to three-dimensional space and time which we abbreviate as spacetime. PDEs provide a mathematical description of physical spacetime, and they are therefore among the most widely used forms of mathematics. As a consequence, methods for the solution of PDEs, such as the MOL (Schiesser, 1991; Schiesser and Griffiths, 2009; Griffiths and Schiesser, 2011), are of broad interest in science and engineering.

As a basic illustrative example of a PDE, we consider

\[\tag{1} \frac{\partial u}{\partial t}=D\frac{\partial ^2 u}{\partial x^2} \]


where

  • \(u\) dependent variable (dependent on \(x\) and \(t\))
  • \(t\) independent variable representing time
  • \(x\) independent variable representing one dimension of three-dimensional space
  • \(D\) real positive constant, explained below

Note that eq. (1) has two independent variables, \(x\) and \(t\ ,\) which is the reason it is classified as a PDE (any differential equation with more than one independent variable is a PDE). A differential equation with only one independent variable is generally termed an ordinary differential equation (ODE); we will consider ODEs later as part of the MOL.

Eq. (1) is termed the diffusion equation or heat equation. When applied to heat transfer, it is Fourier's second law; the dependent variable \(u\) is temperature and \(D\) is the thermal diffusivity. When eq. (1) is applied to mass diffusion, it is Fick's second law; \(u\) is mass concentration and \(D\) is the coefficient of diffusion or the diffusivity.

\( \frac{\partial u}{\partial t}\) is the partial derivative of \(u\) with respect to \(t\) (\(x\) is held constant when taking this partial derivative, which is why partial is used to describe this derivative). Eq. (1) is first order in \(t\) since the highest order partial derivative in \(t\) is first order; it is second order in \(x\) since the highest order partial derivative in \(x\) is second order. Eq. (1) is linear or first degree since all of the terms are to the first power (note that order and degree can be easily confused).

Initial and Boundary Conditions

Before we consider a solution to eq. (1), we must specify some auxiliary conditions to complete the statement of the PDE problem. The number of required auxiliary conditions is determined by the highest order derivative in each independent variable. Since eq. (1) is first order in \(t\) and second order in \(x\ ,\) it requires one auxiliary condition in \(t\) and two auxiliary conditions in \(x\ .\) To have a complete well-posed problem, some additional conditions may have to be included, for example conditions that specify valid ranges for coefficients (Kreiss and Lorenz, 2004). However, this is a more advanced topic and will not be developed further here.

\(t\) is termed an initial value variable and therefore requires one initial condition (IC). It is an initial value variable since it starts at an initial value, \(t_0\ ,\) and moves forward over a finite interval \( t_0 \leq t \leq t_f \) or a semi-infinite interval \( t_0 \leq t \leq \infty \) without any additional conditions being imposed. Typically in a PDE application, the initial value variable is time, as in the case of eq. (1).


\(x\) is termed a boundary value variable and therefore requires two boundary conditions (BCs). It is a boundary value variable since it varies over a finite interval \( x_0 \leq x \leq x_f \ ,\) a semi-infinite interval \( x_0 \leq x \leq \infty \) or a fully infinite interval \( -\infty \leq x \leq \infty \ ,\) and at two different values of \( x \ ,\) conditions are imposed on \( u \) in eq. (1). Typically, the two values of \( x \) correspond to boundaries of a physical system, and hence the name boundary conditions.


As examples of auxiliary conditions for eq. (1) (there are other possibilities),

  • An IC could be

\[\tag{2} u(x,t=0) = u_0 \]

where \(u_0\) is a given function of \(x\).

  • Two BCs could be

\[\tag{3} u(x=x_0,t) = u_b \]


\[\tag{4} \frac{\partial u(x=x_f,t)}{\partial x} = 0 \]

where \(u_b\) is a given boundary (constant) value of \(u\) for all \( t \). Another common possibility is that the initial condition is given as above and the boundary conditions are \(u\left( x=x_0,t\right) = f_0 \left(t \right)\) and \(u\left( x=x_f,t \right) = f_b \left( t \right) \). Discontinuities at the boundaries, produced, for example, by differences in initial and boundary conditions at the boundaries, can cause computational difficulties, particularly for hyperbolic problems.


BCs can be of three types:

  1. If the dependent variable is specified, as in BC (3), the BC is termed Dirichlet.
  2. If the derivative of the dependent variable is specified, as in BC (4), the BC is termed Neumann.
  3. If both the dependent variable and its derivative appear in the BC, it is termed a BC of the third type or a Robin BC.
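As an illustration of the third type (not part of the problem statement above), a Robin BC at \(x=x_f\) combines the dependent variable and its derivative, for example with given constants \(a\ ,\) \(b\) and \(c\)

\[ a\,u(x=x_f,t)+b\,\frac{\partial u(x=x_f,t)}{\partial x}=c \]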

Types of PDE Solutions

Eqs. (1), (2), (3) and (4) constitute a complete PDE problem and we can now consider what we mean by a solution to this problem. Briefly, the solution of a PDE problem is a function that defines the dependent variable as a function of the independent variables, in this case \(u(x,t)\ .\) In other words, we seek a function that when substituted in the PDE and all of its auxiliary conditions, satisfies simultaneously all of these equations.

The solution can be of two types:

  1. If the solution is an actual mathematical function, it is termed an analytical solution. While analytical solutions are the gold standard for PDE solutions in the sense that they are exact, they are also generally difficult to derive mathematically for all but the simplest PDE problems (in much the same way that solutions to nonlinear algebraic equations generally cannot be derived mathematically except for certain classes of nonlinear equations).
  2. If the solution is in numerical form, e.g., \(u(x,t)\) tabulated numerically as a function of \(x\) and \(t\ ,\) it is termed a numerical solution. Ideally, the numerical solution is simply a numerical evaluation of the analytical solution. But since an analytical solution is generally unavailable for realistic PDE problems in science and engineering, the numerical solution is an approximation to the analytical solution, and our expectation is that it represents the analytical solution with good accuracy. However, numerical solutions can be computed with modern-day computers for very complex problems, and they will generally have good accuracy (even though this cannot be established directly by comparison with the analytical solution since the latter is usually unknown).

The focus of the MOL is the calculation of accurate numerical solutions.

PDE Subscript Notation

Before we go on to the general classes of PDEs that the MOL can handle, we briefly discuss an alternative notation for PDEs. Instead of writing the partial derivatives as in eq. (1), we adopt a subscript notation that is easier to state and bears a closer resemblance to the associated computer coding. For example, we can write eq. (1) as \[\tag{5} u_t=Du_{xx} \]

where, for example, \(u_t \) is subscript notation for \( \frac{\partial u}{\partial t}\ .\) In other words, a partial derivative is represented as the dependent variable with a subscript that defines the independent variable. For a derivative that is of order \(n\ ,\) the independent variable is repeated \(n\) times, e.g., for eq. (1), \(u_{xx} \) represents \( \frac{\partial ^2 u}{\partial x^2}\ .\)

A General PDE System

Using the subscript notation, we can now consider some general PDEs. For example, a general PDE first order in \(t\) can be considered \[\tag{6} \overline{u}_t=\overline{f}(\overline{x},t,\overline{u},\overline{u}_{\overline{x}}, \overline{u}_{\overline{xx}},\ldots) \]

where an overbar (overline) denotes a vector. For example, \(\overline{u}\) denotes a vector of \(n\) dependent variables \[ \overline{u}=(u_1,u_2,\ldots ,u_n)^T \] i.e., a system of \(n\) simultaneous PDEs. Similarly, \(\overline{f}\) denotes an \(n\) vector of derivative functions \[ \overline{f}=(f_1,f_2,\ldots ,f_n)^T \] where \(T\) denotes a transpose (here a row vector is transposed to a column vector). Note also that \(\overline{x}\) is a vector of spatial coordinates, so that, for example, in Cartesian coordinates \(\overline{x}=(x,y,z)^T\) while in cylindrical coordinates \(\overline{x}=(r,\theta,z)^T\ .\) Thus, eq. (6) can represent PDEs in one, two and three spatial dimensions.

Since eq. (6) is first order in \(t\ ,\) it requires one initial condition \[\tag{7} \overline{u}(\overline{x},t=0)=\overline{u}_0(\overline{x},\overline{u},\overline{u}_{\overline{x}}, \overline{u}_{\overline{xx}},\ldots) \]

where \(\overline{u}_0\) is an \(n\)-vector of initial condition functions \[\tag{8} \overline{u}_0=(u_{10},u_{20},\ldots ,u_{n0})^T \]

The derivative vector \(\overline{f}\) of eq. (6) includes functions of various spatial derivatives, \((\overline{u},\overline{u}_{\overline{x}},\overline{u}_{\overline{xx}},\ldots)\ ,\) and therefore we cannot state a priori the required number of BCs. For example, if the highest order derivative in \(\overline{x}\) in all of the derivative functions is second order, then we require \(2\times n \) BCs for each of the spatial independent variables, e.g., \(2\times 2\times n\) for a 2D PDE system, \(2\times 3\times n \) BCs for a 3D PDE system.

We state the general BC requirement of eq. (6) as \[\tag{9} \overline{f}_b(\overline{x}_b,\overline{u}, \overline{u}_{\overline{x}},\overline{u}_{\overline{xx}},\ldots,t)=0 \]

where the subscript \(b\) denotes boundary. The vector of boundary condition functions, \(\overline{f}_b\) has a length (number of functions) determined by the highest order derivative in \(\overline{x}\) in each PDE (in eq. (6) ) as discussed previously.

PDE Geometric Classification

Eqs. (6), (7) and (9) constitute a general PDE system to which the MOL can be applied. Before proceeding to the details of how this might be done, we need to discuss the three basic forms of the PDEs as classified geometrically. This geometric classification can be done rigorously if certain mathematical forms of the functions in eqs. (6), (7) and (9) are assumed. However, we will adopt a somewhat more descriptive (less rigorous but more general) form of these functions for the specification of the three geometric classes.

If the derivative functions in eq. (6) contain only first order derivatives in \(\overline{x}\ ,\) the PDEs are classified as first order hyperbolic. As an example, the equation \[\tag{10} u_t+vu_x=0 \]

is generally called the linear advection equation; in physical applications, \(v\) is a linear or flow velocity. Although eq. (10) is possibly the simplest PDE, this simplicity is deceptive in the sense that it can be very difficult to integrate numerically since it propagates discontinuities, a distinctive feature of first order hyperbolic PDEs.

Eq. (10) is termed a conservation law since it expresses conservation of mass, energy or momentum under the conditions for which it is derived, i.e., the assumptions on which the equation is based. Conservation laws are a bedrock of PDE mathematical models in science and engineering, and an extensive literature pertaining to their solution, both analytical and numerical, has evolved over many years.

An example of a first order hyperbolic system (using the notation \(u_1 \Rightarrow u, u_2 \Rightarrow v\)) is \[\tag{11} u_t=v_x \]

\[\tag{12} v_t=u_x \]

Eqs. (11) and (12) constitute a system of two linear, constant coefficient, first order hyperbolic PDEs.

Differentiation and algebraic substitution can occasionally be used to eliminate some dependent variables in systems of PDEs. For example, if eq. (11) is differentiated with respect to \(t\) and eq. (12) is differentiated with respect to \(x\) \[ u_{tt}=v_{xt} \] \[ v_{tx}=u_{xx} \] we can then eliminate the mixed partial derivative between these two equations (assuming \(v_{xt}\) in the first equation equals \(v_{tx}\) in the second equation) to obtain \[\tag{13} u_{tt}=u_{xx} \]

Eq. (13) is the second order hyperbolic wave equation.

If the derivative functions in eq. (6) contain only second order derivatives in \(\overline{x}\ ,\) the PDEs are classified as parabolic. Eq. (1) is an example of a parabolic PDE.

Finally, if a PDE contains no derivatives in \(t\) (e.g., the LHS of eq. (6) is zero) it is classified as elliptic. As an example, \[\tag{14} u_{xx}+u_{yy}=0 \]

is Laplace's equation where \(x\) and \(y\) are spatial independent variables in Cartesian coordinates. Note that with no derivatives in \(t\ ,\) elliptic PDEs require no ICs, i.e., they are entirely boundary value PDEs.

PDEs with mixed geometric characteristics are possible, and in fact, are quite common in applications. For example, the PDE \[\tag{15} u_t=-u_{x}+u_{xx} \]

is hyperbolic-parabolic. Since it frequently models convection (hyperbolic) through the term \(u_{x}\) and diffusion (parabolic) through the term \(u_{xx}\ ,\) it is generally termed a convection-diffusion equation. If additionally, it includes a function of the dependent variable \(u\) such as \[\tag{16} u_t=-u_{x}+u_{xx}+f(u) \]

then it might be termed a convection-diffusion-reaction equation since \(f(u)\) typically models the rate of a chemical reaction. If the function depends only on the independent variables, i.e., \[\tag{17} u_t=-u_{x}+u_{xx}+g(x,t) \]

the equation could be labeled an inhomogeneous PDE.

This discussion clearly indicates that PDE problems come in an infinite variety, depending, for example, on linearity, types of coefficients (constant, variable), coordinate system, geometric classification (hyperbolic, elliptic, parabolic), number of dependent variables (number of simultaneous PDEs), number of independent variables (number of dimensions), type of BCs, smoothness of the IC, etc., so it might seem impossible to formulate numerical procedures with any generality that can address a relatively broad spectrum of PDEs. But in fact, the MOL provides a surprising degree of generality, although the success in applying it to a new PDE problem depends to some extent on the experience and inventiveness of the analyst, i.e., MOL is not a single, straightforward, clearly defined approach to PDE problems, but rather, is a general concept (or philosophy) that requires specification of details for each new PDE problem. We now proceed to illustrate the formulation of a MOL numerical algorithm with the caveat that this will not be a general discussion of MOL as it might be applied to any conceivable PDE problem.

Elements of the MOL

The basic idea of the MOL is to replace the spatial (boundary value) derivatives in the PDE with algebraic approximations. Once this is done, the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables. Thus, in effect only the initial value variable, typically time in a physical problem, remains. In other words, with only one remaining independent variable, we have a system of ODEs that approximate the original PDE. The challenge, then, is to formulate the approximating system of ODEs. Once this is done, we can apply any integration algorithm for initial value ODEs to compute an approximate numerical solution to the PDE. Thus, one of the salient features of the MOL is the use of existing, and generally well established, numerical methods for ODEs.


To illustrate this procedure, we consider the MOL solution of eq. (10). First we need to replace the spatial derivative \(u_{x}\) with an algebraic approximation. In this case we will use a finite difference (FD) such as \[\tag{18} u_x \approx \frac{u_{i}-u_{i-1}}{\Delta x} \]

where \(i\) is an index designating a position along a grid in \(x\) and \(\Delta x\) is the spacing in \(x\) along the grid, assumed constant for the time being. Thus, for the left end value of \(x\ ,\) \(i=1\ ,\) and for the right end value of \(x\ ,\) \(i=M\ ,\) i.e., the grid in \(x\) has \(M\) points. Then the MOL approximation of eq. (10) is \[\tag{19} \frac{du_i}{dt} = -v\frac{u_{i}-u_{i-1}}{\Delta x}, \qquad 1 \leq i \leq M \]

Note that eq. (19) is written as an ODE since there is now only one independent variable, \(t\ .\) Note also that eq. (19) represents a system of \(M\) ODEs.

This transformation of a PDE, eq. (10), to a system of ODEs, eqs. (19), illustrates the essence of the MOL, namely, the replacement of the spatial derivatives, in this case \(u_x\ ,\) so that the solution of a system of ODEs approximates the solution of the original PDE. Then, to compute the solution of the PDE, we compute a solution to the approximating system of ODEs. But before considering this integration in \(t\ ,\) we have to complete the specification of the PDE problem. Since eq. (10) is first order in \(t\) and first order in \(x\ ,\) it requires one IC and one BC. These will be taken as

\[\tag{20} u(x,t=0) =f(x) \]

\[\tag{21} u(x=0,t) = g(t) \]


Since eqs. (19) constitute \(M\) initial value ODEs, \(M\) initial conditions are required and from eq. (20), these are \[\tag{22} u(x_i,t=0) = f(x_i),\quad 1 \leq i \leq M \]

Also, application of BC (21) gives for grid point \( i=1\) \[\tag{23} u(x_1,t) = g(t), \quad t \ge 0 \]

Eqs. (19), (22) and (23) now constitute the complete MOL approximation of eq. (10) subject to eqs. (20) and (21). The solution of this ODE system gives the \(M\) functions \[\tag{24} u_{1}(t),u_{2}(t), \ldots u_{M-1}(t),u_{M}(t) \]

that is, an approximation to \(u(x,t)\) at the grid points \( i=1,2, \ldots M\ .\)
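To make the preceding concrete, the following minimal Python sketch forms the ODEs of eqs. (19), imposes BC (23) directly (so that only \(u_2,\ldots,u_M\) are integrated), and integrates the system with a library ODE integrator (scipy's solve_ivp). The velocity, grid, Gaussian IC \(f(x)\) and zero BC \(g(t)\) are illustrative assumptions, not part of the article.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters and functions (assumed, not from the article)
    v = 1.0                                            # flow velocity in eq. (10)
    M = 101                                            # number of grid points
    x = np.linspace(0.0, 1.0, M)                       # grid in x
    dx = x[1] - x[0]
    f = lambda xx: np.exp(-200.0 * (xx - 0.25) ** 2)   # IC f(x), eq. (20)
    g = lambda t: 0.0                                  # BC g(t), eq. (21)

    def rhs(t, u):
        """MOL ODEs of eq. (19) for u_2 ... u_M; u_1 = g(t) from eq. (23)."""
        dudt = np.empty_like(u)
        dudt[0] = -v * (u[0] - g(t)) / dx              # i = 2 uses the boundary value
        dudt[1:] = -v * (u[1:] - u[:-1]) / dx          # i = 3 ... M
        return dudt

    sol = solve_ivp(rhs, (0.0, 0.5), f(x[1:]), rtol=1e-6, atol=1e-8)
    u_final = np.concatenate(([g(0.5)], sol.y[:, -1])) # approximate u(x_i, t = 0.5)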

Before we go on to consider the numerical integration of the approximating ODEs, in this case eqs. (19), we briefly consider further the FD approximation of eq. (18), which can be written as \[\tag{25} u_x \approx \frac{u_{i}-u_{i-1}}{\Delta x}+O(\Delta x) \]

where \(O(\Delta x)\) denotes of order \(\Delta x\ ,\) that is, the truncation error (from a truncated Taylor series) of the approximation of eq. (18) is proportional to \(\Delta x\) (varies linearly with \(\Delta x\)); thus eq. (25) is also termed a first order FD (since \(\Delta x\) is to the first power in the order or truncation error term).

Note that the numerator of eq. (18), \(u_{i}-u_{i-1}\ ,\) is a difference in two values of \(u\ .\) Also, the denominator \(\Delta x\) remains finite (nonzero). Hence the name finite difference (and it is an approximation because of the truncated Taylor series, so a more complete description is first order finite difference approximation). In fact, in the limit \(\Delta x \rightarrow 0\) the approximation of eq. (18) becomes exactly the derivative. However, in a practical computer-based calculation, \(\Delta x\) remains finite, so eq. (18) remains an approximation.
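A quick numerical check of this first order behavior can be made with an assumed smooth test function, here \(u(x)=\sin x\) (an illustration only): halving \(\Delta x\) should roughly halve the error of eq. (18).

    import numpy as np

    # Error of the first order FD of eq. (18) for u(x) = sin(x) at x = 1;
    # the exact derivative is cos(1). Halving dx roughly halves the error.
    x = 1.0
    for dx in (0.1, 0.05, 0.025):
        approx = (np.sin(x) - np.sin(x - dx)) / dx
        print(dx, abs(approx - np.cos(x)))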


Also, eq. (10) typically describes the flow of a physical quantity such as concentration of a chemical species or temperature, represented by \(u\ ,\) from left to right with respect to \(x\) with velocity \(v\ .\) Then, using the FD approximation of eq. (25) at \(i\) involves \(u_i\) and \(u_{i-1}\ .\) In a flowing system, \(u_{i-1}\) is to the left (in \(x\)) of \(u_i\) or is upstream or upwind of \(u_i\) (to use a nautical analogy). Thus, eq. (25) is termed a first order upwind FD approximation. Generally, for strongly convective systems such as modeled by eq. (10), some form of upwinding is required in the numerical solution of the descriptive PDEs; we will look at this requirement further in the subsequent discussion.

ODE Integration within the MOL

We now consider briefly the numerical integration of the \(M\) ODEs of eqs. (19). If the derivative \(\frac{du_i}{dt}\) is approximated by a first order FD \[\tag{26} \frac{du_i}{dt} \approx \frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}+O(\Delta t) \]

where \(n\) is an index for the variable \( t \) (\( t \) moves forward in steps denoted or indexed by \(n\)), then a FD approximation of eqs. (19) is \[ \frac{u_i^{n+1}-u_i^n}{\Delta t} = -v\frac{u_{i}^n-u_{i-1}^n}{\Delta x} \] or solving for \(u_i^{n+1}\ ,\) \[\tag{27} u_i^{n+1}=u_i^n-(v\Delta t/\Delta x)(u_{i}^n-u_{i-1}^n), i = 1, 2, \ldots M \]

Eq. (27) has the important characteristic that it gives \(u_i^{n+1}\) explicitly, that is, we can solve for the solution at the advanced point in \(t\ ,\) \(n+1\ ,\) from the solution at the base point \(n\ .\) In other words, explicit numerical integration of eqs. (19) is by the forward FD of eq. (26), and this procedure is generally termed the forward Euler method which is the most basic form of ODE integration.

While the explicit form of eq. (27) is computationally convenient, it has a possible limitation. If the time step \(\Delta t\) is above a critical value, the calculation becomes unstable, which is manifested by successive changes in the dependent variable, \(\Delta u = u_i^{n+1}-u_i^{n}\ ,\) becoming larger and eventually unbounded as the calculation moves forward in \(t\) (for increasing \(n\)). In fact, for the solution of eq. (10) by the method of eq. (27) to remain stable, the dimensionless group \((v\Delta t/\Delta x)\ ,\) which is called the Courant-Friedrichs-Lewy or CFL number, must remain below a critical value, in this case, unity. Note that this stability limit places an upper limit on \(\Delta t\) for a given \(v\) and \(\Delta x\ ;\) if one attempts to increase the accuracy of eq. (27) by using a smaller \(\Delta x\) (larger number of grid points in \(x\) by increasing \(M\)), a smaller value of \(\Delta t\) is required to keep the CFL number below its critical value. Thus, there is a conflicting requirement of improving accuracy while maintaining stability.
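A short sketch of the explicit scheme of eq. (27) follows, with the same illustrative grid, IC and BC assumed above and \(\Delta t\) chosen so that the CFL number is 0.5; raising the CFL number above one in this script makes the computed solution grow without bound.

    import numpy as np

    # Forward Euler time stepping of eq. (27); stable here because v*dt/dx = 0.5 < 1.
    v, M = 1.0, 101
    x = np.linspace(0.0, 1.0, M)
    dx = x[1] - x[0]
    dt = 0.5 * dx / v                     # CFL number v*dt/dx = 0.5
    cfl = v * dt / dx
    u = np.exp(-200.0 * (x - 0.25) ** 2)  # assumed Gaussian IC, eq. (20)

    for n in range(int(round(0.5 / dt))):
        u[1:] = u[1:] - cfl * (u[1:] - u[:-1])   # eq. (27) for i = 2 ... M
        u[0] = 0.0                               # BC (21) with the assumed g(t) = 0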

The way to circumvent the stability limit of the explicit Euler method as implemented via the forward FD of eq. (26) is to use a backward FD for the derivative in \(t\)

\[\tag{28} \frac{du_i}{dt} \approx \frac{u_{i}^{n}-u_{i}^{n-1}}{\Delta t}+O(\Delta t) \]

so that the FD approximation of eqs. (19) becomes \[ \frac{u_i^n-u_i^{n-1}}{\Delta t} = -v\frac{u_{i}^n-u_{i-1}^n}{\Delta x} \] or after rearrangement (with \((v\Delta t/\Delta x)=\alpha\)) \[\tag{29} (1+\alpha)u_i^n-\alpha u_{i-1}^n = u_i^{n-1}, i = 1, 2, \ldots M \]


Note that we cannot now solve eq. (29) explicitly for the solution at the advanced point, \(u_i^n\ ,\) in terms of the solution at the base point \(u_i^{n-1}\ .\) Rather, eq. (29) is implicit in \(u_i^n\) because \(u_{i-1}^n\) is also unknown; that is, we must solve eq. (29) written for each grid point \(i = 1, 2, \dots M\) as a simultaneous system of bidiagonal equations (bidiagonal because each of eqs. (29) has two unknowns so that simultaneous solution of the full set of approximating algebraic equations is required to obtain the complete numerical solution \(u_1^n,u_2^n, \dots u_M^n\)). Thus, the solution of eqs. (29) is termed the implicit Euler method.
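Because each of eqs. (29) couples only \(u_i^n\) and \(u_{i-1}^n\ ,\) and \(u_1^n\) is fixed by the BC, the bidiagonal system can be solved by a single forward sweep. A minimal sketch of one implicit Euler step (same illustrative grid, Gaussian IC and zero BC as above):

    import numpy as np

    # One implicit Euler step of eqs. (29): (1 + alpha) u_i^n - alpha u_{i-1}^n = u_i^{n-1}.
    # The step uses a CFL number of 2, which would be unstable for the explicit eq. (27).
    v, M = 1.0, 101
    x = np.linspace(0.0, 1.0, M)
    dx = x[1] - x[0]
    dt = 2.0 * dx / v
    alpha = v * dt / dx

    u_old = np.exp(-200.0 * (x - 0.25) ** 2)   # u^{n-1}, the assumed Gaussian IC
    u_new = np.empty_like(u_old)
    u_new[0] = 0.0                             # u_1^n from the BC, g(t_n) = 0
    for i in range(1, M):
        u_new[i] = (u_old[i] + alpha * u_new[i - 1]) / (1.0 + alpha)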

We could then naturally ask why use eqs. (29) when eq. (27) is so much easier to use (explicit calculation of the solution at the next step in \(t\) of eq. (27) vs. the implicit calculation of eqs. (29)). The answer is that the implicit calculation of eqs. (29) is often worthwhile because the implicit Euler method has no stability limit (is unconditionally stable in comparison with the explicit method with the stability limit stated in terms of the CFL condition). However, there is a price to pay for the improved stability of the implicit Euler method, that is, we must solve a system of simultaneous algebraic equations; eqs. (29) is an example. Furthermore, if the original ODE system approximating the PDE is nonlinear, we have to solve a system of nonlinear algebraic equations (eqs. (29) are linear, so the solution is much easier). The system of nonlinear equations is typically solved by a variant of Newton's method which can become very demanding computationally if the number of ODEs is large (due to the use of a large number of spatial grid points in the MOL approximation of the PDE, especially when we attempt the solution of two dimensional (2D) and three dimensional (3D) PDEs). If you have had some experience with Newton's method, you may appreciate that the Jacobian matrix of the nonlinear algebraic system can become very large and sparse as the number of spatial grid points increases.

Additionally, although there is no limit for \(\Delta t\) with regard to stability for the implicit method, there is a limit with regard to accuracy. In fact, the first order upwind approximation of \(u_x\) in eq. (10), eq. (25), and the first order approximation of \(u_t\) in eq. (10), eq. (26) or (28), taken together limit the accuracy of the resulting FD approximation of eq (10). One way around this accuracy limitation is to use higher order FD approximations for the derivatives in eq. (10).

For example, if we consider the second order approximation for \(u_x\) at \(i\) \[\tag{30} u_x \approx \frac{u_{i+1}-u_{i-1}}{2\Delta x}+O(\Delta x^2) \]

substitution in eq. (10) gives the MOL approximation of eq. (10) \[\tag{31} \frac{du_i}{dt} = -v\frac{u_{i+1}-u_{i-1}}{2\Delta x},\; 1 \leq i \leq M \]

We could then reason that if the integration in \(t\) is performed by the explicit Euler method, i.e., we use the approximation of eq. (26) for \(u_t = \frac{du_i}{dt}\ ,\) the resulting numerical solution should be more accurate than the solution from eq. (27). In fact, the MOL approximation based on this idea \[\tag{32} u_i^{n+1}=u_i^n-\frac{v\Delta t}{2\Delta x}(u_{i+1}^n-u_{i-1}^n), i = 1, 2, \ldots M \]

is unconditionally unstable; this conclusion can be demonstrated by a von Neumann stability analysis that we will not cover here. This surprising result demonstrates that replacing the derivatives in PDEs with higher order approximations does not necessarily guarantee more accurate solutions, or even stable solutions.
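The instability can also be observed numerically. The following sketch advances eq. (32) on an assumed periodic grid (so that only the interior scheme is exercised) with a CFL number of 0.5; the computed solution grows by many orders of magnitude even though the exact solution simply translates the initial pulse.

    import numpy as np

    # Scheme (32): centered in x, forward Euler in t. Unstable for any dt > 0;
    # high-frequency components of roundoff level are amplified step by step.
    v, M = 1.0, 100
    x = np.linspace(0.0, 1.0, M, endpoint=False)   # periodic grid (an assumption)
    dx = x[1] - x[0]
    cfl = 0.5                                      # v*dt/dx
    u = np.exp(-200.0 * (x - 0.25) ** 2)

    for n in range(2000):
        u = u - 0.5 * cfl * (np.roll(u, -1) - np.roll(u, 1))   # eq. (32)
    print(np.max(np.abs(u)))   # far larger than the initial amplitude of 1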

Numerical Diffusion and Oscillation

Even if the implicit Euler method is used for the integration in \(t\) of eqs. (31) to achieve stability (or a more sophisticated explicit integrator in \(t\) is used that automatically adjusts \(\Delta t\) to achieve a prescribed accuracy), we would find the solution oscillates unrealistically. This numerical distortion is one of two generally observed forms of numerical error. The other numerical distortion is diffusion which would be manifest in the solution from eq. (27). Briefly, the solution would exhibit excessive smoothing or rounding at points in \(x\) where the solution changes rapidly.

It should be noted that numerical diffusion produced by a first order approximation (e.g., of \(u_x\) in eq. (10)) is to be expected due to severe truncation of the Taylor series (beyond the first or linear term), and that the production of numerical oscillations by higher order approximations is predicted by the Godunov order barrier theorem (Wesseling, 2001). To briefly explain, the order barrier is first order and any linear approximation, including FDs, above first order will not be monotonicity preserving (i.e. will introduce oscillations).

Eq. (10) is an example of a difficult Riemann problem (Wesseling, 2001) if IC eq. (20) is discontinuous; for example, \(u(x,t=0) = h(x)\) where \(h(x)\) is the Heaviside unit step function. The (exact) analytical solution is the initial condition function \(f(x)\) of eq. (20) moving left to right with velocity \(v\) (from eq. (10)) and without distortion, i.e., \(u(x,t) = f(x-vt)\ ;\) however, the numerical solution will oscillate if \(u_{x}\) in eq. (10) is replaced with a linear approximation of second or higher order.

We should also mention one point of terminology for FD approximations. The RHS of eq. (30) is an example of a centered approximation since the two points at \(i+1\) and \(i-1\) are centered around the point \(i\ .\) Eq. (25) is an example of a noncentered, one-sided or upwind approximation since the points \(i\) and \(i-1\) are not centered with respect to \(i\ .\) Another possibility would be to use the points \(i\) and \(i+1\) in which case the approximation of \(u_x\) would be downwind (for \(v > 0 \)). Although this might seem like a logical alternative to eq. (25) for approximating eq. (10) at point \(i\ ,\) the resulting MOL system of ODEs is actually unstable. Physically, for a convective system modeled by eq. (10), we would be using information that is downwind of the point of interest (point \(i\)) and thus unavailable in a system flowing with positive velocity \(v\ .\) If \(v\) is negative, then using points \(i\) and \(i+1\) would be upwinding (and thus stable). This illustrates the concept that the direction of flow can be a key consideration in forming the FD approximation of a (convective or hyperbolic) PDE.

Finally, to conclude the discussion of first order hyperbolic PDEs such as eq. (10), since the Godunov theorem indicates that FD approximations above first order will produce numerical oscillations in the solution, the question remains if there are approximations above first order that are nonoscillatory. To answer this question we note first that the Godunov theorem applies to linear approximations; for example, eq. (30) is a linear approximation since \(u\) on the RHS is to the first power. If, however, we consider nonlinear approximations for \(u_x\ ,\) we can in fact develop approximations that are nonoscillatory. The details of such nonlinear approximations are beyond the scope of this discussion, so we will merely mention that they are termed high resolution methods which seek a total variation diminishing (TVD) solution. Such methods, which include flux limiter (Wesseling, 2001) and weighted essentially nonoscillatory (WENO) (Shu, 1998) methods, seek to avoid non-real oscillations when shocks or discontinuities occur in the solution.

So far we have considered only the MOL solution of first order PDEs, e.g., eq. (10). We conclude this discussion of the MOL by considering a second order PDE, the parabolic eq. (1). To begin, we need an approximation for the second derivative \(u_{xx}\ .\) A commonly used second order, central approximation is (again, derived from the Taylor series, so the term \(O(\Delta x^2)\) represents the truncation error) \[\tag{33} u_{xx} \approx \frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2}+O(\Delta x^2) \]

Substituting eq. (33) into eq. (1) gives a system of approximating ODEs \[\tag{34} \frac{du_i}{dt} = D\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2}, i=1,2, \ldots M \]

Eqs. (34) are then integrated subject to IC (2) and BCs (3) and (4). This integration in \(t\) can be by the explicit Euler method, the implicit Euler method, or any other higher order integrator for initial value ODEs. Generally stability is not as much of a concern as with the previous hyperbolic PDEs (a characteristic of parabolic PDEs, which tend to smooth solutions, in contrast to hyperbolic PDEs, which tend to propagate nonsmooth conditions). However, stability constraints do exist for explicit methods. For example, for the explicit Euler method with a step \(\Delta t\) in \(t\ ,\) the stability constraint is \(D \Delta t/\Delta x^2 < \mathrm{constant}\) (so that as \(\Delta x\) is reduced to achieve better spatial accuracy in \(x\ ,\) \(\Delta t\) must also be reduced to maintain stability).

Before proceeding with the integration of eqs. (34), we must include BCs (3) and (4). The Dirichlet BC at \(x=x_0\ ,\) eq. (3), is merely \[\tag{35} u_{1}=u_b \]

and therefore the ODE of eqs. (34) for \(i=1\) is not required and the ODE for \(i=2\) becomes \[\tag{36} \frac{du_2}{dt} = D\frac{u_{3}-2u_{2}+u_{b}}{\Delta x^2} \]


Differential Algebraic Equations

Eq. (35) is algebraic, and therefore in combination with the ODEs of eqs. (34), we have a differential algebraic (DAE) system.

At \(i=M\ ,\) we have eqs. (34) \[\tag{37} \frac{du_M}{dt} = D\frac{u_{M+1}-2u_{M}+u_{M-1}}{\Delta x^2} \]

Note that \(u_{M+1}\) is outside the grid in \(x\ ,\) i.e., \(M+1\) is a fictitious point. But we must assign a value to \(u_{M+1}\) if eq. (37) is to be integrated. Since this requirement occurs at the boundary point \(i=M\ ,\) we obtain this value by approximating BC (4) using the centered FD approximation of eq. (30) \[ u_x \approx \frac{u_{M+1}-u_{M-1}}{2\Delta x}=0 \]

or \[\tag{38} u_{M+1}=u_{M-1} \]

We can add eq. (38) as an algebraic equation to our system of equations, i.e., continue to use the DAE format, or we can substitute \(u_{M+1}\) from eq. (38) into the eq. (37) \[\tag{39} \frac{du_M}{dt} = D\frac{u_{M-1}-2u_{M}+u_{M-1}}{\Delta x^2} \]

and arrive at an ODE system (eqs. (34) for \(i=3,\ldots M-1\ ,\) eq. (36) for \(i=2\) and eq. (39) for \(i=M\)). Both approaches, either an ODE system or a DAE system, have been used in MOL studies. Either way, we now have a complete formulation of the MOL ODE or DAE system, including the BCs at \(i=1,M\) in eqs. (34). The integration of these equations then gives the numerical solution \(u_1(t),u_2(t), \ldots u_M(t)\ .\) The preceding discussion is based on a relatively basic DAE system, but it indicates that integrators designed for DAE systems can play an important role in MOL analysis.
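As a concrete sketch of the ODE route just described (eq. (36) for \(i=2\ ,\) eqs. (34) for the interior points and eq. (39) for \(i=M\)), the following Python fragment integrates the system with a library initial value ODE integrator (scipy's solve_ivp). The values \(D=1\ ,\) \(u_b=1\) and the zero IC are illustrative assumptions, not from the article.

    import numpy as np
    from scipy.integrate import solve_ivp

    # MOL ODEs for the heat equation: Dirichlet BC u = u_b at x = x0 (eq. (35)),
    # Neumann BC u_x = 0 at x = xf (eqs. (38)-(39)). Parameter values are illustrative.
    D, u_b = 1.0, 1.0
    M = 51
    x = np.linspace(0.0, 1.0, M)
    dx = x[1] - x[0]

    def rhs(t, u):
        # u holds u_2 ... u_M; u_1 = u_b is the Dirichlet value of eq. (35)
        dudt = np.empty_like(u)
        dudt[0] = D * (u[1] - 2.0 * u[0] + u_b) / dx**2            # eq. (36), i = 2
        dudt[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # eq. (34), 3 <= i <= M-1
        dudt[-1] = D * (2.0 * u[-2] - 2.0 * u[-1]) / dx**2         # eq. (39), i = M
        return dudt

    u0 = np.zeros(M - 1)                                # assumed IC u(x, 0) = 0 for i = 2..M
    sol = solve_ivp(rhs, (0.0, 0.1), u0, method="LSODA", rtol=1e-6, atol=1e-8)
    u_final = np.concatenate(([u_b], sol.y[:, -1]))     # prepend the Dirichlet boundary value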

If the implicit Euler method is applied to eqs. (34), we have \[ \frac{u_i^{n+1}-u_i^{n}}{\Delta t} = D\frac{u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1}}{\Delta x^2}, \quad i=1,2, \ldots M \] or (with \( \alpha = D\Delta t/\Delta x^2\)) \[ u_{i+1}^{n+1}-(1/\alpha+2)u_i^{n+1}+u_{i-1}^{n+1} = - (1/\alpha) u_i^{n},\quad i=1,2, \ldots M \] which is a tridiagonal system of algebraic equations (three unknowns in each equation). Since such banded systems (the nonzero elements are banded around the main diagonal) are common in the numerical solution of PDE systems, special algorithms have been developed to take advantage of the banded structure, typically by not storing and using the zero elements outside the band. These special algorithms that take advantage of the structure of the problem equations can result in major savings in computation time. In the case of tridiagonal equations, the special algorithm is generally called Thomas' method. If the coefficient matrix of the algebraic system does not have a well defined structure, such as bidiagonal or tridiagonal, but consists of mostly zeros with a relatively small number of nonzero elements, which is often the case in the numerical solution of PDEs, the coefficient matrix is said to be sparse; special algorithms and associated software for sparse systems have been developed that can result in very substantial savings in the storage and numerical manipulation of sparse matrices.
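A sketch of Thomas' method for a general tridiagonal system is given below (no pivoting, which is adequate for diagonally dominant systems such as the implicit Euler equations above); the small test at the end simply compares the result against a dense solve.

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c
        and right-hand side d by Thomas' method (forward elimination, then back
        substitution). a[0] and c[-1] are ignored. Sketch only: no pivoting."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        u = np.empty(n)
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            u[i] = dp[i] - cp[i] * u[i + 1]
        return u

    # Small illustrative check against a dense solve
    n = 5
    a = np.full(n, 1.0); b = np.full(n, -4.0); c = np.full(n, 1.0)
    d = np.arange(1.0, n + 1.0)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True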

Generally, when applying the MOL, integration of the approximating ODE/DAEs (e.g., eqs. (26) and (34)) is accomplished by using library routines for initial value ODE/DAEs. In other words, the explicit programming of the ODE/DAE integration (such as the explicit or implicit Euler method) is avoided; rather, an established integrator is used. This has the advantage that: (1) the detailed programming of the integration can be avoided, particularly the linear algebra (solution of simultaneous equations) required by an implicit integrator, so that the MOL analysis is substantially simplified, and (2) library routines (usually written by experts) include features that make these routines especially effective (robust) and efficient such as automatic integration step size adjustment and the use of higher order integration methods (beyond the first order accuracy of the Euler methods); also, generally they have been thoroughly tested. Thus, the use of quality ODE/DAE library routines is usually an essential part of MOL analysis. We therefore list at the end of this article some public domain sources of such library routines.

Since the MOL essentially replaces the problem PDEs with systems of approximating ODEs, the addition of other ODEs is easily accomplished; this might occur, for example, if the BCs of a PDE are stated as ODEs. Thus, a mixed model consisting of systems of PDEs and ODEs is readily accommodated. Further, this idea can easily be extended to systems of ODE/DAE/PDEs. In other words, the MOL is a very flexible procedure for various combinations of algebraic and differential equations (and this flexibility generally is not available with other library software for PDEs).

Higher Dimensions and Different Coordinate Systems

To conclude this discussion of the MOL solution of PDEs, we cover two additional points. First, we have considered PDEs in only Cartesian coordinates, and in fact, just one Cartesian coordinate, \(x\ .\) But MOL analysis can in principle be carried out in any coordinate system. Thus, eq. (1) can be generalized to \[\tag{40} \frac{\partial u}{\partial t}=D\nabla ^2 u \]

where \( \nabla ^2\) is the coordinate independent Laplacian operator which can then be expressed in terms of a particular coordinate system. For example, in cylindrical coordinates eq. (40) is \[\tag{41} \frac{\partial u}{\partial t}=D\left( \frac{\partial ^{2}u}{\partial r^{2}} +\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^{2}}\frac{\partial ^{2}u}{\partial \theta ^{2}}+\frac{\partial ^{2}u}{\partial z^{2}}\right) \]

and in spherical coordinates eq. (40) is \[\tag{42} \frac{\partial u}{\partial t}=D\left[ \frac{\partial ^{2}u}{\partial r^{2}}+\frac{2}{r}\frac{\partial u}{\partial r} +\frac{1}{r^{2}}\left( \frac{\partial ^{2}u}{\partial \theta ^{2}}+\frac{ \cos \theta }{\sin \theta }\frac{\partial u}{\partial \theta }\right) + \frac{1}{r^{2}\sin ^{2}\theta }\frac{\partial ^{2}u}{\partial\phi ^{2}}\right] \]

The challenge then in applying the MOL to PDEs such as eqs. (41) and (42) is the algebraic approximation of the RHS (\(\nabla ^2 u\)) using, for example, FDs, finite elements or finite volumes; all of these approximations have been used in MOL analysis, as well as Galerkin, least squares, spectral and other methods. A particularly demanding step is regularization of singularities such as at \( r=0\) (note the number of divisions by \( r\) in the RHS of eqs. (41) and (42)) and at \(\theta = 0, \pi\) (note the divisions by \(\sin \theta\) in eq. (42)). Thus the application of the MOL typically requires analysis based on the experience and creativity of the analyst (i.e., it is generally not a mechanical procedure from beginning to end).
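As one common illustration of such a regularization (a standard result, not developed further in this article), if the solution is symmetric about \(r=0\) so that \(\partial u/\partial r \rightarrow 0\) as \(r \rightarrow 0\ ,\) l'Hôpital's rule gives

\[ \lim_{r \rightarrow 0}\frac{1}{r}\frac{\partial u}{\partial r}=\frac{\partial ^2 u}{\partial r^2} \]

so that the radial terms of eq. (41) at \(r=0\) can be replaced by \(2\,\partial ^2 u/\partial r^2\ .\)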

The complexity of the numerical solution of higher dimensional PDEs in various coordinate systems prompts the question of why a particular coordinate system would be selected over others. The mathematical answer is that the judicious choice of a coordinate system facilitates the implementation of the BCs in the numerical solution.

The answer based on physical considerations is that the coordinate system is selected to reflect the geometry of the problem system. For example, if the physical system has the shape of a cylinder, cylindrical coordinates would be used. This choice then facilitates the implementation of the BC at the exterior surface of the physical system (exterior surface of the cylinder). However, this can also lead to complications such as the \(r=0\) singularities in eq. (41) (due to the variable \(1/r\) and \(1/r^2\) coefficients). The resolution of these complications is generally worth the effort rather than the use of a coordinate system that does not naturally conform to the geometry of the physical system. If the physical system is not shaped in accordance with a particular coordinate system, i.e., has an irregular geometry, then an approximation to the physical geometry is used, generally termed body fitted coordinates.

As a concluding point, we might consider the origin of the name method of lines. If we consider eqs. (34), integration of this ODE system produces the solution \( u_2(t), u_3(t), \ldots u_M(t)\) (note \(u_1(t)=u_b\ ,\) a constant, from BC (3)). We could then plot these functions in an \(x-u(x,t)\) plane as a vertical line at each \(x\) (\(i=2,3, \dots M\)) with the height of each line equal to \(u(x_i,t)\ .\) In other words, the plot of the solution would be a set of vertical parallel lines suggesting the name method of lines (Schiesser, 1991, Schiesser and Griffiths, 2009).

Sources of ODE/DAE Integrators and MOL Software

One of the very useful aspects of MOL is that it enables tried and tested ODE/DAE numerical routines to be used, many of which are in the public domain. The following sources are a good starting point for these routines. The LSODE and VODE series of ODE/DAE integrators [2s], DASSL [2s], Radau5 and MEBDFDAE [6s] for DAEs, and the SUNDIALS library [5s] are widely used in MOL analysis. Some challenging test problems for ODE/DAE routines, as well as some efficient codes and an abundance of illustrative test results, can be found at [6s]. Software for MOL analysis is available from [7s] and [8s]. Additionally, routines that can be called from MOL codes are available to perform a variety of complementary computations (e.g., functional approximation by interpolation, evaluation of integrals, maximization and minimization in optimization associated with the solution of PDEs) [1s,3s,4s,8s].

Example Implementation

Part II of this article deals with an example implementation.

Acknowledgements

The authors would like to thank the anonymous reviewers for their positive and constructive comments.

References

  • Kreiss, H-O. and J. Lorenz (2004), Initial-Boundary Value Problems and the Navier-Stokes Equations, SIAM, Philadelphia.
  • Schiesser, W. E. and G. W. Griffiths (2009), A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab, Cambridge University Press; see also http://www.pdecomp.net/
  • Shu, C-W (1998), Essentially Non-oscillatory and Weighted Essentially Non-oscillatory Schemes for Hyperbolic Conservation Laws. In: Cockburn, B., C. Johnson, C-W. Shu and E. Tadmor (Eds.), Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, Lecture Notes in Mathematics, vol. 1697, Springer, pp. 325-432.
  • Wesseling, P. (2001), Principles of Computational Fluid Dynamics, Springer, Berlin.


See also

Differential-Algebraic Equations, Numerical Analysis, Partial Differential Equations, Wave Equation


Licensed under CC BY-SA 3.0 | Source: http://www.scholarpedia.org/article/Method_of_lines