Lattice field theory is an area of theoretical physics, specifically
quantum field theory, which deals with field theories defined on a spatial
or space-time lattice.
The theoretical description of the fundamental constituents of matter and the interactions between them is based on quantum field theory. The basic ingredients of field theory are fields. They are functions \(\phi\) which associate to each point \(x\) of space-time a quantity \(\phi(x)\ .\)
In the case of classical field theories, \(\phi(x)\) usually is an element of a finite dimensional real or complex manifold, which in many cases is a linear space. Prominent examples are the electromagnetic field, described by the vector potential \(A_{\mu}(x)\ ,\) and real or complex scalar fields \(\phi(x)\ .\)
In contrast, in the operator formulation of quantum field theory the fields are operators acting in a Hilbert space. (More precisely, quantum fields \(\phi(x)\) are operator valued distributions, which means that integrals \(\int f(x) \phi(x) dx\) with suitable test functions \(f(x)\) are operators.)
The physical content of a field theory depends essentially on the Lagrangian \(\mathcal{L}(\phi(x), \partial^{n} \phi(x))\ ,\) which is a function of \(\phi(x)\) and its derivatives. The Lagrangian determines the field equations, which comprise the interactions. If the strength of an interaction is given by a small parameter \(g\ ,\) it is possible to calculate physical quantities approximately to a satisfactory accuracy by means of perturbation theory, which amounts to a power series expansion in \(g\ .\) This is, for example, the case in quantum electrodynamics (QED), where the interaction is proportional to the fine structure constant \(\alpha \approx 1/137\ ,\) and many interesting observables can be obtained as power series in \(\alpha\ .\) There are, however, important cases, where it turned out that perturbation theory is inadequate for the calculation of physical quantities. The most prominent example is the low-energy regime of Quantum Chromodynamics (QCD), the theory of the strong interactions of elementary particles.
Not only Quantum Chromodynamics, but also other components of the Standard Model of elementary particle physics and moreover theories of physics beyond the Standard Model supply us with non-perturbative problems. An important step to answer such questions has been made by K. Wilson in 1974 (Wilson, 1974). He introduced a formulation of Quantum Chromodynamics on a space-time lattice, which allows the application of various non-perturbative techniques. This discretization will be explained in detail below. It leads to mathematically well-defined problems, which are (at least in principle) solvable. It should also be pointed out that the introduction of a space-time lattice can be taken as a starting point for a mathematically clean approach to quantum field theory, so-called constructive quantum field theory.
In modern quantum field theory, the introduction of a space-time lattice is part of an approach different from the operator formalism. This is lattice field theory. Its main ingredients are the functional integral representation of Greens functions, the continuation to imaginary (Euclidean) time, and the discretization of space-time on a lattice.
Lattice field theory has turned out to be very successful for the non-perturbative calculation of physical quantities. This Wiki gives an introduction to and an overview of the foundations and methods of lattice field theory. The main concepts are illustrated here with a scalar field theory.
The functional integral formulation of quantum field theory is a generalization of the quantum mechanical path integral. In quantum mechanics of a point particle in one space dimension, the transition amplitude is given by \[ \langle x'|\mathrm{e}^{-\mathrm{i} HT}|x \rangle, \] where \(|x\rangle\) is an (improper) eigenstate of the position operator and \(H\) is the Hamilton operator.
The transition amplitude can be written as a path integral \[ \langle x'|\mathrm{e}^{-\mathrm{i} HT}|x \rangle = \int\!\mathcal{D}x \ \mathrm{e}^{\mathrm{i} S}, \] where the integration is over all possible paths \(x(t)\) from \(x\) to \(x'\) during the time interval \(T\ ,\) see Figure 2, and \[ S = \int_0^T\!dt\,L(x,\dot x) \] is the classical action for such a path.
Formally the path integral measure is written as \[ \mathcal{D}x \equiv \prod_t dx(t) \] up to a normalization factor. For a particle in 3 dimensional space this is generalized to paths \(x_i(t)\ ,\) where \(i\)=1,2,3, and \[ \mathcal{D}x = \prod_t \prod_i dx_i(t) . \]
Perhaps this is the most intuitive picture of the quantum mechanical transition amplitude. It can be written as an integral over contributions from all possible paths from the starting point to the final point. Each path is weighted by the classical action evaluated along this path.
For a detailed and mathematically rigorous account of path integrals the interested reader is referred to the textbook (Glimm and Jaffe, 1987).
The representation of quantum mechanics in terms of path integrals can be translated to field theory. Consider a scalar field \(\varphi(x)\ ,\) where \(x = (\vec x, t)\) labels space-time coordinates, and the time evolution of \(\varphi(\vec x,t)\) is given by \[ \varphi (\vec x,t) = \mathrm{e}^{\mathrm{i} Ht}\,\varphi (\vec x, t=0)\,\mathrm{e}^{-\mathrm{i} Ht}. \] The objects of interest in field theory are vacuum expectation values of (time ordered) products of field operators, the Greens functions: \[ \langle 0|\varphi(x_1)\varphi(x_2)\dots\varphi(x_n)|0 \rangle , \qquad t_1 > t_2 > \dots > t_n. \] Prominent examples are propagators \( \langle 0 |\varphi(x)\varphi(y)|0 \rangle. \) The Greens functions essentially contain all physical information. In particular, S-matrix elements are related to Greens functions, e.g. the 2-particle scattering elements can be obtained from \( \langle 0 | \varphi(x_1) \dots \varphi(x_4) | 0 \rangle. \)
Instead of discussing the functional integral representation for quantum field theory from the beginning, we shall restrict ourselves to translating the quantum mechanical concepts to field theory by means of analogy. To this end the basic variables \(x_i(t)\) are translated into fields \(\phi(\vec x,t)\ .\) The rules for the translation are \[ \begin{align} x_i(t) \quad & \longleftrightarrow \quad \phi(\vec x,t)\\ i \quad & \longleftrightarrow \quad \vec x\\ \prod_{t,i} dx_i(t) \quad & \longleftrightarrow \quad \prod_{t,\vec x} d\phi (\vec x,t) \equiv \mathcal{D}\phi\\ S = \int\!dt\ L \quad & \longleftrightarrow \quad S = \int\!dt\,d^3x\ \mathcal{L}, \end{align} \] where \(S\) is the classical action.
For scalar field theory one might consider the following Lagrangian density\[ \begin{align} \mathcal{L} &= \frac{1}{2}\left( (\dot\phi(x) )^2 - (\nabla \phi(x))^2\right) - \frac{m_0^2}{2}\phi(x)^2 - \frac{g_0}{4!}\phi(x)^4 \\ &= \frac{1}{2}(\partial_{\mu} \phi)(\partial^{\mu} \phi) - \frac{m_0^2}{2}\phi(x)^2 - \frac{g_0}{4!}\phi(x)^4 . \end{align} \]
The mass \(m_0\) and coupling constant \(g_0\) bear a subscript \(0\ ,\) since they are bare, unrenormalized parameters. This theory plays a role in the context of Higgs-Yukawa models, where \(\phi(x)\) is the Higgs field.
In analogy to the quantum mechanical path integral, a representation of the Greens functions in terms of what one calls functional integrals is written down as \[ \langle 0|\varphi(x_1)\varphi(x_2)\dots\varphi(x_n)|0 \rangle = \frac{1}{Z} \int\!\mathcal{D}\phi\ \phi(x_1)\phi(x_2)\dots\phi(x_n) \ \mathrm{e}^{\mathrm{i} S} \] with \( Z = \int\!\mathcal{D}\phi \ \mathrm{e}^{\mathrm{i} S}. \) These expressions involve integrals over all classical field configurations.
As mentioned before, no derivation of the functional integral is attempted here; its form is only motivated by analogy. Furthermore, in the case of quantum mechanics the transition amplitude has been considered, whereas now the formula for Greens functions has been written down, which differs slightly.
The formulae for functional integrals give rise to some questions. First of all, how does the projection onto the ground state \(| 0 \rangle\) arise? Secondly, these integrals contain oscillating integrands, due to the imaginary exponents; what about their convergence? Moreover, is there a way to evaluate them numerically?
In the following it will be discussed how the introduction of imaginary times helps in answering these questions.
Return to quantum mechanics for a moment. Here one can also introduce Greens functions, e.g. \[ G(t_1,t_2) = \langle 0|X(t_1)X(t_2)|0 \rangle,\qquad t_1 > t_2, \] where \(X(t)\) is the position operator in the Heisenberg picture. In the following it will be demonstrated that these Greens functions are related to quantum mechanical amplitudes at imaginary times by analytic continuation. Consider the matrix element \[ \langle x',t'|X(t_1)X(t_2)|x,t \rangle = \langle x'|\mathrm{e}^{-\mathrm{i} H(t'-t_1)} X \mathrm{e}^{-\mathrm{i} H(t_1-t_2)} X \mathrm{e}^{-\mathrm{i} H(t_2-t)}|x \rangle \] for \(t' > t_1 > t_2 > t\ .\) Now choose all times to be purely imaginary \[ t = -\mathrm{i}\tau, \] again ordered, \(\tau' > \tau_1 > \tau_2 > \tau\ .\) This yields the expression \[ \langle x'|\mathrm{e}^{-H(\tau '-\tau_1)} X \mathrm{e}^{-H(\tau_1-\tau_2)} X \mathrm{e}^{-H(\tau_2-\tau)}|x \rangle. \] Inserting a complete set of energy eigenstates, the expansion of the time evolution operator in imaginary times is \[ \mathrm{e}^{-H\tau} = \sum_{n=0}^{\infty} \mathrm{e}^{-E_n\tau} |n \rangle \langle n| = |0 \rangle \langle 0| + \mathrm{e}^{-E_1\tau}|1 \rangle \langle 1| + \dots, \] where the ground state energy has been normalized to \(E_0 = 0\ .\) For large \(\tau\) it reduces to the projector onto the ground state. Consequently, in the limit \(\tau' \rightarrow \infty\) and \(\tau \rightarrow -\infty\) our matrix element becomes \[ \langle x'| 0 \rangle \langle 0 | X \mathrm{e}^{-H(\tau_1-\tau_2)} X | 0 \rangle \langle 0 | x \rangle, \] and similarly \[ \langle x'| \mathrm{e}^{-H(\tau' - \tau)} | x \rangle \longrightarrow \langle x' | 0 \rangle \langle 0 | x \rangle. \] Therefore the Greens function at imaginary times, \[ G_E(\tau_1,\tau_2) = \langle 0| X\mathrm{e}^{-H(\tau_1-\tau_2)} X |0 \rangle, \] can be expressed as \[ G_E(\tau_1,\tau_2) = \lim_{\tau'\rightarrow \infty,\ \tau \rightarrow -\infty} \frac{\langle x'|\mathrm{e}^{-H(\tau '-\tau_1)} X \mathrm{e}^{-H(\tau_1-\tau_2)} X \mathrm{e}^{-H(\tau_2-\tau)}|x \rangle}{\langle x'|\mathrm{e}^{-H(\tau'-\tau)}|x \rangle}. \] Now the denominator as well as the numerator can be represented by path integrals. The difference to the case of real times is that for imaginary times we have to use \[ \langle x | \mathrm{e}^{-H\Delta \tau}| y \rangle \approx \sqrt{\frac{m}{2\pi \Delta \tau}} \ \exp -\Delta \tau \left\{ \frac{m}{2} \left( \frac{x - y}{\Delta \tau} \right)^2 + V(x) \right\}. \] This leads to the path integral representation \[ G_E(\tau_1,\tau_2) = \frac{1}{Z}\int\!\mathcal{D}x\ x(\tau_1) x(\tau_2) \, \mathrm{e}^{-S_E}, \] where \( \quad Z = \int\!\mathcal{D}x\ \mathrm{e}^{-S_E} \ \, \) and \[ S_E = \int\!d\tau \left\{ \frac{m}{2} \left(\frac{dx}{d\tau}\right)^2 + V(x(\tau))\right\}. \] The Greens function at real times, which we were interested in originally, can be obtained from \(G_E\) by means of analytical continuation, \(G(t_1,t_2) = G_E(\mathrm{i} t_1, \mathrm{i} t_2)\ .\)
The analytic continuation has to be done in such a way that all time arguments are rotated simultaneously counter-clockwise in the complex \(t\)-plane. This is the so-called Wick rotation, illustrated in Figure 3.
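Before turning to field theory, a concrete numerical check of the formulas above may be helpful. The following sketch (Python, not part of the original article) builds the short-time kernel \( \langle x | \mathrm{e}^{-H\Delta\tau}| y \rangle \) on a position grid for a harmonic oscillator with \(m = 1\) and \(V(x) = x^2/2\) (an illustrative choice, as are the grid and step sizes) and evaluates the ratio defining \(G_E(\tau_1,\tau_2)\) for large outer times; for this potential the exact answer is \(G_E(\tau) = \mathrm{e}^{-\tau}/2\ ,\) which the sketch reproduces approximately.

```python
import numpy as np

# Illustrative sketch: Euclidean Green's function G_E(tau_1, tau_2) for a
# particle in a harmonic potential V(x) = x^2/2 with m = 1, built from the
# short-time kernel
#   <x| e^{-H dtau} |y> ~ sqrt(m/(2 pi dtau))
#                         * exp(-dtau * [ m/2 ((x-y)/dtau)^2 + V(x) ])
# quoted above.  Grid, dtau and step numbers are arbitrary choices.
m, dtau = 1.0, 0.05
x = np.linspace(-6.0, 6.0, 241)
dx = x[1] - x[0]
V = 0.5 * x**2

# Transfer matrix T[i, j] ~ <x_i| e^{-H dtau} |x_j> dx
T = np.sqrt(m / (2.0 * np.pi * dtau)) * np.exp(
    -dtau * (0.5 * m * ((x[:, None] - x[None, :]) / dtau) ** 2 + V[:, None])
) * dx

X = np.diag(x)                 # position operator on the grid
P = np.linalg.matrix_power
n_out = 200                    # large outer Euclidean times project onto |0>
i0 = len(x) // 2               # fixed endpoints x = x' = 0

for n_in in (10, 20, 40):      # tau_1 - tau_2 = n_in * dtau
    num = (P(T, n_out) @ X @ P(T, n_in) @ X @ P(T, n_out))[i0, i0]
    den = P(T, 2 * n_out + n_in)[i0, i0]
    tau = n_in * dtau
    print(tau, num / den, 0.5 * np.exp(-tau))   # estimate vs exact e^{-tau}/2
```

The ratio of numerator and denominator automatically removes the ground state energy, in accordance with the normalization \(E_0 = 0\) used above; the remaining discrepancy is the discretization error of the short-time kernel and shrinks as \(\Delta\tau\) is reduced.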
Now we turn to field theory again. The Green's functions \[ G(x_1,\dots,x_n) = \langle 0 | T \varphi(x_1) \dots \varphi(x_n)| 0 \rangle, \] continued to imaginary times, \(t = -\mathrm{i}\tau\ ,\) are the so-called Schwinger functions \[ G_E( (\vec x_1,\tau_1), \dots, (\vec x_n,\tau_n)) = G( (\vec x_1,-i\tau_1), \dots, (\vec x_n,-i\tau_n)). \] In analogy to the quantum mechanical case their functional integral representation reads \[ G_E(x_1,\dots,x_n) = \frac{1}{Z}\int\!\mathcal{D}\phi\ \phi(x_1)\dots \phi(x_n)\, \mathrm{e}^{-S_E} \] with \( Z = \int\!\mathcal{D}\phi\ \mathrm{e}^{-S_E} \) and \(\tag{1} \begin{align} S_E &= \int\!d^3xd\tau \left\{ \frac{1}{2}\left(\frac{d\phi}{d\tau}\right)^2 + \frac{1}{2}(\nabla \phi)^2 + \frac{m_0^2}{2} \phi^2 + \frac{g_0}{4!} \phi^4 \right\} \\ &= \int\!d^4x \left\{ \frac{1}{2}(\partial_{\mu}\phi)^2 + \frac{m_0^2}{2}\phi^2 + \frac{g_0}{4!}\phi^4 \right\}. \end{align} \)
As can also be seen from the kinetic part contained in \(S_E\ ,\) the metric of Minkowski space \[ - ds^2 = -dt^2 + dx_1^2 + dx_2^2 + dx_3^2 \] has changed into \[ d\tau^2 + dx_1^2 + dx_2^2 + dx_3^2 , \] which is the metric of a Euclidean space. Therefore one speaks of Euclidean Greens functions \(G_E\) and of Euclidean functional integrals. They are taken as starting point for non-perturbative investigations of field theories and for constructive studies.
Whether it is possible to continue a specific field theory analytically from real to imaginary times and vice versa, depends on certain conditions to be satisfied. For a large class of field theories these conditions have been analyzed and formulated by Osterwalder and Schrader, see (Osterwalder and Schrader, 1973, 1975). In particular, a Euclidean field theory must satisfy the so-called reflection positivity in order to correspond to a proper field theory in Minkowski space.
As \(S_E\) is real, the integrals of interest are now real and no unpleasant oscillations occur. Moreover, since \(S_E\) is bounded from below, the factor \(\exp (-S_E)\) in the integrand is bounded. Strongly fluctuating fields have a large Euclidean action \(S_E\) and are thus suppressed by the factor \(\exp (-S_E)\ .\) (Strictly speaking, this statement does not make sense in field theory unless renormalization is taken into account.) This makes Euclidean functional integrals so attractive compared to their Minkowskian counterparts.
One might think that in the Euclidean domain everything is unphysical and there is no possibility to get physical results directly from the Euclidean Greens functions. But this is not the case. For example, the spectrum of the theory can be obtained in the following way. Consider a vacuum expectation value of the form \[ \langle 0| A_1 \mathrm{e}^{-H\tau} A_2 |0 \rangle, \] where the \(A_i\)'s are formed out of the field \(\varphi\ ,\) e.g. \(A = \varphi(\vec x, 0)\) or \(A = \int\!d^3x\ \varphi(\vec x, 0)\ .\) Now, with the familiar insertion of a complete set of energy eigenstates, one has \[ \langle 0| A_1 \mathrm{e}^{-H\tau} A_2 |0 \rangle = \sum_n \langle 0|A_1|n \rangle \mathrm{e}^{-E_n\tau} \langle n|A_2|0 \rangle . \] In case of a continuous spectrum the sum is to be read as an integral. On the other hand, representing the expectation value as a functional integral leads to \[ \frac{1}{Z} \int\!\mathcal{D}\phi\ \mathrm{e}^{-S_E} A_1(\tau)A_2(0) = \sum_n \langle 0|A_1|n \rangle \langle n|A_2|0 \rangle \mathrm{e}^{-E_n\tau}. \] This is similar to the ground state projection at the beginning of this section. For large \(\tau\) the lowest energy eigenstates will dominate the sum and one can thus obtain the low-lying spectrum from the asymptotic behaviour of this expectation value. By choosing \(A_1, A_2\) suitably, e.g. for \[ A \equiv A_1 = A_2 = \int\!d^3x\ \varphi(\vec x,0), \] such that \(\langle 0 | A | 1 \rangle \neq 0\) for a one-particle state \(| 1 \rangle\) with zero momentum \(\vec p = 0\) and mass \(m_1\ ,\) one gets \[ \frac{1}{Z} \int\! \mathcal{D}\phi\ \mathrm{e}^{-S_E} A(\tau) A(0) = |\langle 0|A|1 \rangle|^2 \mathrm{e}^{-m_1\tau} + \dots, \] which means that one can extract the mass of the particle.
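The following toy example (Python, with synthetic data rather than an actual lattice calculation; amplitude, mass and noise level are arbitrary) illustrates how the mass is read off in practice: the effective mass \(\log(C(\tau)/C(\tau+1))\) of a two-point function \(C(\tau)\) approaches \(m_1\) at large \(\tau\ .\)

```python
import numpy as np

# Synthetic two-point function C(tau) ~ |<0|A|1>|^2 exp(-m_1 tau) plus noise;
# the values 2.3, 0.8 and the 1% noise are arbitrary illustrative choices.
rng = np.random.default_rng(0)
tau = np.arange(16)
C = 2.3 * np.exp(-0.8 * tau) * (1.0 + 0.01 * rng.standard_normal(len(tau)))

# Effective mass m_eff(tau) = log(C(tau)/C(tau+1)) approaches m_1 as tau grows
# and excited-state contributions die out (here only one state was put in).
m_eff = np.log(C[:-1] / C[1:])
print(m_eff)   # ~0.8 for all tau
```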
From now on we shall remain in Euclidean space and suppress the subscript \(E\ ,\) so that \(S \equiv S_E\) means the Euclidean action.
One central question still remains: does the infinite dimensional integration over all classical field configurations, i.e. \[\tag{2} \mathcal{D}\phi = \prod_x d\phi(x), \]
make sense at all? How is it defined?
In quantum mechanics the path integral representation can be derived as a limit of a discretization in time. As in field theory the fields depend on the four Euclidean coordinates instead of a single time coordinate, we may now introduce a discretized space-time in form of a lattice, for example a hypercubic lattice, specified by \[ x_{\mu} = a n_{\mu}, \qquad n_{\mu} \in \mathbf{Z}, \] see Figure 4.
The quantity \(a\) is called the lattice spacing for obvious reasons. It should be noted that the lattice spacing, being a dimensionful quantity, is not a parameter of the discretized theory, which could e.g. be inserted in a computer program for an evaluation of the path integral. The size of the lattice spacing in physical units is a derived quantity determined by the dynamics. This will be explained in Section "Continuum limit".
The scalar field \[ \phi(x), \qquad x \in \mbox{lattice}, \] is now defined on the lattice points only. Partial derivatives are replaced by finite differences, \[ \partial_{\mu}\phi \longrightarrow \Delta_{\mu}\phi(x) \equiv \frac{1}{a} (\phi(x+a{\hat{\mu}})-\phi(x)), \] and space-time integrals by sums: \[ \int\!d^4x \quad \longrightarrow \sum_x a^4 \ . \]
The action of discretized \(\phi^4\)-theory, Eq.(1), can be written as \[ S = \sum_x a^4 \left\{ \frac{1}{2} \sum_{\mu =1}^4 (\Delta_{\mu}\phi(x))^2 + \frac{m_0^2}{2}\phi(x)^2 + \frac{g_0}{4!}\phi(x)^4 \right\}. \]
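As an illustration, a minimal Python sketch of this lattice action with periodic boundary conditions reads as follows; the lattice size and the values of \(a\ ,\) \(m_0\) and \(g_0\) are arbitrary choices made here.

```python
import numpy as np

# Discretized phi^4 action for a field phi(x) stored as a 4-dimensional array
# with periodic boundary conditions; parameters are illustrative only.
def lattice_action(phi, a, m0, g0):
    S = 0.0
    for mu in range(4):
        # forward difference (phi(x + a mu_hat) - phi(x)) / a with wrap-around
        dphi = (np.roll(phi, -1, axis=mu) - phi) / a
        S += 0.5 * np.sum(dphi**2)
    S += np.sum(0.5 * m0**2 * phi**2 + (g0 / 24.0) * phi**4)
    return a**4 * S

phi = np.random.default_rng(1).standard_normal((8, 8, 8, 8))
print(lattice_action(phi, a=1.0, m0=0.5, g0=1.2))
```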
In the functional integrals the measure \( \mathcal{D}\phi\ ,\) Eq.(2), involves the lattice points \(x\) only. So a discrete set of variables has to be integrated. If the lattice is taken to be finite, one just has finite dimensional integrals.
Discretization of space-time using lattices has one very important consequence. Due to a non-zero lattice spacing, a cutoff in momentum space arises. The cutoff can be observed by having a look at the Fourier transformed field \[ \tilde{\phi}(p) = \sum_x a^4\ \mathrm{e}^{-\mathrm{i} px}\ \phi(x). \] The Fourier transformed functions are periodic in momentum-space, so that one can identify \[ p_{\mu} \cong p_{\mu}+\frac{2\pi}{a} \] and restrict the momenta to the so-called first Brillouin zone \[ -\frac{\pi}{a} \, < \, p_{\mu}\,\leq \frac{\pi}{a}. \] The inverse Fourier transformation, for example, is given by \[ \phi(x) = \int_{-\pi/a}^{\pi/a} \frac{d^4 p}{(2\pi)^4}\ \mathrm{e}^{\mathrm{i} px} \ \tilde{\phi}(p). \] One recognises an ultraviolet cutoff \[ |p_{\mu}| \leq \frac{\pi}{a}. \] Therefore field theories on a lattice are regularized in a natural way.
In order to begin in a well-defined way one would start with a finite lattice. Let us assume a hypercubic lattice with length \(L_1=L_2=L_3=L\) in every spatial direction and length \(L_4=T\) in Euclidean time, \[ x_{\mu} = an_{\mu},\qquad n_{\mu} = 0,1,2,\dots,L_{\mu}-1, \] with finite volume \(V = L^3T\ .\) In a finite volume one has to specify boundary conditions. A popular choice is periodic boundary conditions \[ \phi(x) = \phi(x+aL_{\mu}\,\hat{\mu}), \] where \(\hat{\mu}\) is the unit vector in the \(\mu\)-direction. They imply that the momenta are also discretized, \[ p_{\mu} = \frac{2\pi}{a}\,\frac{l_{\mu}}{L_{\mu}} \qquad \mbox{with} \ l_{\mu} = 0,1,2,\dots,L_{\mu}-1, \] and therefore momentum-space integration is replaced by finite sums \[ \int\!\frac{d^4p}{(2\pi)^4}\ \longrightarrow \ \frac{1}{a^4 L^3 T}\sum_{l_{\mu}}. \] Now, all functional integrals have turned into regularized and finite expressions.
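A small sketch (Python, with illustrative values for \(a\ ,\) \(L\) and \(T\)) listing the allowed momenta, folded into the first Brillouin zone, makes the discreteness and the cutoff explicit.

```python
import numpy as np

# Allowed momenta on an L^3 x T periodic lattice, p_mu = (2 pi / a) l_mu / L_mu,
# folded into the first Brillouin zone -pi/a < p_mu <= pi/a.
# The values of a, L and T are illustrative choices.
a, L, T = 1.0, 8, 16
for mu, L_mu in enumerate((L, L, L, T)):
    l = np.arange(L_mu)
    p = 2.0 * np.pi * l / (a * L_mu)
    p = np.where(p > np.pi / a, p - 2.0 * np.pi / a, p)   # fold into the zone
    print(mu, np.sort(p))
```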
Of course, one would like to recover physics in a continuous and infinite space-time eventually. The task is therefore to take the infinite volume limit, \[ L,T \longrightarrow \infty, \] which is the easier part in general, and to take the continuum limit, \[ a \longrightarrow 0. \] Constructing the continuum limit of a lattice field theory is usually highly nontrivial and most effort is often spent here.
The formulation of Euclidean quantum field theory on a lattice bears a useful analogy to statistical mechanics. Functional integrals have the form of partition functions and we can set up the following correspondence:
| Euclidean field theory | Statistical Mechanics |
|---|---|
| generating functional | partition function |
| action | Hamilton function |
| mass \(m\) | inverse correlation length \(1 / \xi\) |
This formal analogy allows one to use well-established methods of statistical mechanics in field theory and vice versa. Even the terminology of the two fields is often identical. To mention some examples, in field theory one employs high-temperature expansions and mean field approximations, and in statistical mechanics one applies the renormalization group.
An alternative to Euclidean lattice field theory, as described before, is Hamiltonian lattice field theory, introduced by Kogut and Susskind (Kogut and Susskind, 1975). In this formulation only three-dimensional space is discretized on a lattice, whereas time remains continuous. Furthermore, time is kept real and is not continued to the Euclidean domain. Hamiltonian lattice field theory allows the application of some analytical methods like strong coupling expansions and perturbation theory. Since it is not suitable for the numerical Monte Carlo method, it no longer receives as much attention as in its early days and is not covered in more detail here.
Theories of gauge fields can also be formulated on a space-time lattice. As details are explained in the Wiki on lattice gauge theories, we shall just indicate the basic elements of lattice gauge theory for gauge group SU(N).
The paths connecting nearest neighbour points on the lattice are called links.
With each link \[ b = \langle x+a\hat{\mu},x \rangle \] in lattice direction \(\hat{\mu}\) a link variable \[ U(b) \equiv U(x+a\hat{\mu},x) \equiv U_{x \mu} \in {\rm SU}(N) \] is associated. These group valued variables represent the gauge field. The discretized Lie algebra valued gauge field \(A_{\mu}^b(x)\) can be introduced by \[ U_{x \mu} \equiv \exp\{\mathrm{i} g_0 a A_{\mu}^b(x)T_b\}, \] where the \(T_b\) are the generators of the gauge group and \(g_0\) is the bare coupling constant.
The smallest closed paths on the lattice are the plaquettes, as shown in Figure 6.
The plaquette variables \[ U(p) = U_{x\mu \nu} \equiv U_{(x+a\hat{\nu}) (-\nu)} U_{(x+a\hat{\mu}+a\hat{\nu})(-\mu)} U_{(x+a\hat{\mu}) \nu} U_{x \mu} \] enter the Wilson action \[\tag{3} S_W = -\sum_p \frac{2}{g_0^2} {\rm Re}({\rm Tr}(U(p))). \]
In a naive continuum limit, where \(a\) goes to zero, one has, up to an irrelevant additive constant, \[ S_W = \frac{1}{4}\sum_x a^4 F_{\mu \nu}^b F_{\mu \nu}^b + \mathcal{O}(a^5), \] which reproduces the Yang-Mills action.
The integral over all gauge field configurations on the lattice amounts to an integral over all link variables \(U(b)\ .\) So, for the expectation value of any observable \(A\) one writes \[ \langle A \rangle = \frac{1}{Z}\int\!\prod_b dU(b)\ A\ \mathrm{e}^{-S_W}, \] where the integration \(dU(b)\) for a given link \(b\) is to be understood as the invariant integration over the group manifold, normalized to \[ \int\!dU = 1. \]
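To make the construction concrete, the following Python sketch (illustrative, not part of the article) builds random SU(2) link variables on a small periodic lattice, forms the plaquettes, and evaluates the Wilson action of Eq.(3). The lattice size and \(g_0\) are arbitrary choices, and the plaquette is written in the equivalent form \(U(p) = U_{x\nu}^{\dagger}\, U_{(x+a\hat{\nu})\mu}^{\dagger}\, U_{(x+a\hat{\mu})\nu}\, U_{x\mu}\ ,\) obtained from \(U_{x(-\mu)} = U_{(x-a\hat{\mu})\mu}^{\dagger}\ .\)

```python
import numpy as np

# Random SU(2) links on a small periodic lattice; evaluate the Wilson action
# S_W = -(2/g0^2) sum_p Re Tr U(p) of Eq. (3).  Size and g0 are illustrative.
rng = np.random.default_rng(2)
N_side, g0 = 4, 1.0
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_su2():
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)          # unit quaternion -> SU(2) matrix
    return a[0] * np.eye(2) + 1j * np.einsum('i,ijk->jk', a[1:], sigma)

shape = (N_side,) * 4
U = {(x, mu): random_su2() for x in np.ndindex(shape) for mu in range(4)}

def shift(x, mu):
    y = list(x); y[mu] = (y[mu] + 1) % N_side
    return tuple(y)

S_W = 0.0
for x in np.ndindex(shape):
    for mu in range(4):
        for nu in range(mu):        # one plaquette per unordered pair (mu, nu)
            plaq = (U[(x, nu)].conj().T @ U[(shift(x, nu), mu)].conj().T
                    @ U[(shift(x, mu), nu)] @ U[(x, mu)])
            S_W += -(2.0 / g0**2) * np.trace(plaq).real
print(S_W)
```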
Classical bosonic fields are just ordinary functions and satisfy \[ [\phi(x),\phi(y)] = 0, \] which can be considered as the limit \(\hbar \rightarrow 0\) of the quantum commutation relations.
Fermi statistics implies that fermionic quantum fields have the well-known equal-time anticommutation relations \[ \{\psi(\vec x,t),\psi(\vec y,t)\} = 0. \] Motivated by this, one might introduce a classical limit in which classical fermionic fields satisfy \[ \{\psi(x),\psi(y)\}\,=0 \] for all \(x,y\ .\) Classical fermionic fields are therefore anticommuting variables, which are also called Grassmann variables.
We would like to point out that the argument above is just a heuristic motivation. More rigorous approaches can be found in the literature.
In general, a complex Grassmann algebra is generated by elements \(\eta_i\) and \(\bar{\eta}_i\ ,\) which obey \[ \begin{align} \{\eta_i,\eta_j\} &= 0 \\ \{\eta_i,\bar{\eta}_j\} &= 0 \\ \{\bar{\eta}_i,\bar{\eta}_j\} &= 0. \end{align} \] An integration of Grassmann variables can be defined by \[ \int\!d\eta_i\ (a+b\eta_i) = b \] for arbitrary complex numbers \(a,b\ .\)
In fermionic field theories one has Grassmann fields, which associate Grassmann variables with every space-time point. For example, a Dirac field has anticommuting variables \(\psi_{\alpha}(x)\) and \(\bar{\psi}_{\alpha}(x)\ ,\) where \(\alpha\)=1,2,3,4 is the Dirac index. The classical Dirac field obeys \[ \{ \psi_{\alpha}(x), \psi_{\beta}(y) \} = 0, \quad \mbox{etc.}\,. \] In order to write down fermionic path integrals as integrals over fermionic and anti-fermionic field configurations, we write \[ \mathcal{D}\psi\, \mathcal{D}\bar{\psi} = \prod_x \prod_{\alpha} d\psi_{\alpha}(x)\, d\bar{\psi}_{\alpha}(x). \] Then any fermionic Greens function is of the form \[ \langle 0|A|0 \rangle = \frac{1}{Z}\int\! \mathcal{D}\psi\, \mathcal{D}\bar{\psi} \ A\ \mathrm{e}^{-S_F}, \] with an action \(S_F\) for the fermions. For a free Dirac field the action is \[ S_F = \int\!d^4x\ \bar{\psi}(x) (\gamma_{\mu}\partial^{\mu}+m)\psi(x). \] In the context of the Standard Model, fermionic actions are always bilinear in the fermionic fields. With the help of the Grassmann integration rules above one can then show that the functional integrals are formally remarkably simple to calculate: \[\tag{4} \int\! \mathcal{D}\psi \mathcal{D}\bar{\psi}\ \mathrm{e}^{-\int\!d^4x\, \bar{\psi}(x) Q \psi(x)} = \det{Q}. \]
This is the famous fermion determinant. The main problem remains, of course, namely to evaluate the determinant of the typically huge matrix \(Q\ .\)
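For a single pair of Grassmann variables Eq.(4) can be verified directly from the integration rule given above: \[ \int\! d\bar{\eta}\, d\eta\ \mathrm{e}^{-\bar{\eta}\, q\, \eta} = \int\! d\bar{\eta}\, d\eta\ \left( 1 - \bar{\eta}\, q\, \eta \right) = \int\! d\bar{\eta}\, d\eta\ q\, \eta \bar{\eta} = q, \] since the exponential terminates after the linear term and only the term containing both \(\eta\) and \(\bar{\eta}\) survives the integration. For an \(n \times n\) matrix \(Q\) the same manipulations produce the sum over permutations that defines \(\det Q\ .\)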
In numerical simulations of lattice field theories with fermions the calculation of \(\det{Q}\) turns out to be very demanding. Therefore one often uses the quenched approximation, in which \(\det{Q}\) is replaced by a constant, i.e. the fermion determinant is neglected. In recent years various unquenched investigations of Quantum Chromodynamics, which include the determinant, have been performed and have given estimates of the quenching errors.
So far no difficulties for the implementation of fermions on the lattice seem to arise: all one has to do is to discretise the field configurations in the well-known way and to calculate the Greens functions with some of the methods of the last section. There is a problem, however. To see this, consider the propagator of a fermion with mass \(m\) as an example. The fermionic lattice action (in lattice units, \(a = 1\)) is then given by \[ S_F = \frac{1}{2} \sum_x \sum_{\mu} \bar{\psi}(x) (\gamma_{\mu} \Delta_{\mu} + m)\psi(x) + h.c. \] and the resulting propagator is \[ \tilde{\Delta}(k) = \frac{-i\sum_{\mu}\gamma_{\mu}\sin{k_{\mu}}+m} {\sum_{\mu}\sin^2{k_{\mu}}+m^2}. \] The propagator has a pole for small \(k_{\mu}\) representing the physical particle, but there are additional poles near \(k_{\mu} = \pm \pi\ ,\) because \(\sin{k_{\mu}}\) also vanishes there. So \(S_F\) really describes 16 particles instead of one. This problem - euphemistically called fermion doubling - is a crucial obstacle for all lattice representations of quark fields.
Fermion doubling was already known to Wilson in the early days of lattice Quantum Chromodynamics. He proposed a modified action for the fermions in order to damp out the doubled fields in the continuum limit. To this end he added another term, the Wilson term, to the naive action: \[ \begin{align} S_F \rightarrow S_F^{(W)} &= S_F - \frac{r}{2}\sum_x \bar{\psi}(x) \Box \psi(x) \\ &= S_F - \frac{r}{2} \sum_{x,\mu} \bar{\psi}(x) \{ \psi(x+\hat{\mu}) + \psi(x-\hat{\mu}) - 2 \psi(x) \}, \end{align} \] where \(0< r \le 1\ .\) Calculating the propagator with this modified action, one finds that the unwanted doubled fermions acquire masses \(\propto 1 / a\ ,\) so that they become infinitely massive in the continuum limit and disappear from the physical spectrum.
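In momentum space the effect of the Wilson term is transparent: it adds a momentum-dependent piece to the mass (a standard result, quoted here in lattice units, \(a = 1\)), \[ m \ \longrightarrow \ m(k) = m + 2r \sum_{\mu} \sin^2 \frac{k_{\mu}}{2}, \] so each doubler, having one or more momentum components near \(\pm\pi\ ,\) picks up an extra mass of order \(r/a\) in physical units, while the physical mode near \(k = 0\) is modified only by terms of order \(a\ .\)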
Wilson fermions have a serious disadvantage: even at vanishing fermion masses, chiral symmetry is broken explicitly by the Wilson term, and one has problems with calculations for which chiral symmetry is of central importance.
There are alternatives to Wilson's approach. One of them, due to Kogut and Susskind, is the so-called staggered fermion formulation. The idea is to distribute the components \(\psi_{\alpha}\) of the Dirac field over different lattice points. This reduces the doubling from 16 to 4 fermions. Moreover, for massless fermions a remnant of chiral symmetry in the form of a chiral U(1)\(\otimes\)U(1)-symmetry remains.
Even better in view of chiral symmetry and other aspects are formulations for fermions on the lattice, which obey the Ginsparg-Wilson relation. More details can be found in the Wikis on lattice gauge theories and on lattice chiral fermions.
In the previous sections the functional integrals for field theories on the lattice have been defined. But it is another problem to evaluate these high dimensional integrals. A calculation in closed form appears to be impossible in general. In this section some of the methods used to evaluate the functional integrals approximately are considered.
Although lattice field theory offers the possibility to study non-perturbative aspects, perturbation theory is nevertheless a highly valuable tool on the lattice, too. In particular, it can be used to match the results of non-perturbative calculations to perturbative calculations in regions where both methods are applicable.
Perturbation theory amounts to an expansion in powers of the coupling, as in the continuum. The lattice provides an intrinsic UV cutoff \(\pi / a\) for all momenta. Apart from that, one has to observe that the propagators and vertices differ from their continuum counterparts, owing to the form of the lattice action. In particular, gluon self-interaction vertices of all orders appear, not only three- and four-gluon vertices.
The analogies between Euclidean field theory and statistical mechanics have already been pointed out. In statistical mechanics a well-established technique is the high-temperature expansion. For lattice gauge theory, this is an expansion in powers of \[ \beta \sim \frac{1}{g_0^2}, \] which is a small quantity at large bare couplings \(g_0\ .\) Therefore it is the same as a strong coupling expansion. Basically the Boltzmann factor is expanded as \[ \exp{\left( \beta\frac{1}{N} {\rm Re}({\rm Tr}(U(p)))\right)} = 1 + \beta \frac{1}{N} {\rm Re}({\rm Tr}(U(p))) + \dots\ . \] The resulting expansion can be represented diagrammatically, similar to the Feynman diagrams of perturbation theory. The diagram elements, however, are plaquettes \(p\) on the lattice. Every power of \(\beta\) introduces one more plaquette.
In the case of scalar fields, the corresponding method is the hopping parameter expansion, which amounts to an expansion in a parameter \(\kappa\ ,\) which is small for large masses \(m_0\ .\)
Strong coupling and hopping parameter expansions have a finite radius of convergence, in contrast to perturbation theory, which usually is divergent and at best asymptotic.
Further analytical methods are available for approximate evaluations of the functional integrals of lattice gauge theory, among them mean field approximations and renormalization group methods.
On a finite lattice the calculation of expectation values requires the evaluation of finite dimensional integrals. This immediately suggests the application of numerical methods. The first thing one would naively propose is some simple numerical quadrature. In order to understand why this approach is not viable, consider a typical lattice as used in recent calculations. With 40 lattice points in every direction we have \(4 \cdot 40^4\) link variables. For gauge group SU(3) this gives 81,920,000 real variables, far too many for conventional quadrature rules. Therefore some statistical method is required. Producing lattice gauge configurations just randomly turns out to be extremely inefficient. The crucial idea to handle this problem is the concept of importance sampling: for a given lattice action \(S\) quadrature points \(x_i\) are generated with a probability \[ p(x_i) \sim \exp\{-S(x_i)\}. \] This provides us with a large number of points in the important regions of the integral, improving the accuracy drastically.
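The following one-variable toy example (Python, illustrative only) shows the idea: the expectation value \(\langle x^2 \rangle\) with weight \(\exp(-S(x))\ ,\) \(S(x) = x^2/2\ ,\) whose exact value is 1, is estimated once with points drawn from the weight itself and once with uniformly distributed, reweighted points. The former is far more efficient, because almost all uniform points carry negligible weight.

```python
import numpy as np

# Toy illustration of importance sampling (one variable instead of millions):
# estimate <x^2> with weight exp(-S(x)), S(x) = x^2/2; the exact value is 1.
rng = np.random.default_rng(3)
n = 10_000

xs = rng.normal(size=n)                    # points drawn with p(x) ~ exp(-S(x))
print("importance sampling:", np.mean(xs**2))

xu = rng.uniform(-10.0, 10.0, size=n)      # uniform points, reweighted by exp(-S)
w = np.exp(-0.5 * xu**2)
print("uniform sampling:   ", np.sum(w * xu**2) / np.sum(w))
```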
In case of lattice gauge theory the quadrature points are configurations \(U^{(i)} = \left\{ U_{x\mu}^{(i)} \right\}\ .\) An expectation value \[ \langle 0 | A | 0 \rangle = \frac{1}{Z} \int\! \mathcal{D}U\ A(U) \ \mathrm{e}^{-S(U)} \] is numerically approximated by the average \[ \bar{A} \equiv \frac{1}{n} \sum_{i=1}^n A( U^{(i)} ). \]
The Monte Carlo method consists in producing a sequence of configurations \(U^{(1)} \rightarrow U^{(2)} \rightarrow U^{(3)} \rightarrow \dots\) with the appropriate probabilities in a statistical way. This is of course done on a computer. An update is a step where a single link variable \(U_{x\mu}\) is changed, whereas a sweep implies that one goes once through the entire lattice, updating all link variables. A commonly used technique for obtaining updates is the Metropolis algorithm.
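A minimal Metropolis sketch for the lattice \(\phi^4\) theory introduced above (Python; lattice size, parameters and proposal width are illustrative choices) shows the structure of update, sweep and measurement. The error printed at the end is the naive statistical error discussed in the next paragraph, ignoring autocorrelations between successive sweeps.

```python
import numpy as np

# Metropolis sketch for the lattice phi^4 theory (a = 1, illustrative values).
# A sweep visits every site once; a proposed change phi(x) -> phi(x) + delta
# is accepted with probability min(1, exp(-Delta S)), so configurations occur
# with probability ~ exp(-S), i.e. importance sampling.
rng = np.random.default_rng(4)
L, m0, g0, delta = 4, 0.5, 1.2, 0.5
phi = np.zeros((L,) * 4)

def local_S(phi, x, val):
    """Part of the action containing phi(x) = val."""
    s = 0.5 * m0**2 * val**2 + (g0 / 24.0) * val**4
    for mu in range(4):
        for sgn in (+1, -1):
            y = list(x); y[mu] = (y[mu] + sgn) % L
            s += 0.5 * (val - phi[tuple(y)])**2
    return s

history = []
for sweep in range(200):
    for x in np.ndindex(phi.shape):
        old = phi[x]
        new = old + delta * rng.uniform(-1.0, 1.0)
        dS = local_S(phi, x, new) - local_S(phi, x, old)
        if dS < 0 or rng.random() < np.exp(-dS):
            phi[x] = new
    history.append(np.mean(phi**2))

obs = np.array(history[50:])                       # discard thermalization sweeps
print(obs.mean(), obs.std() / np.sqrt(len(obs)))   # estimate and naive error
```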
An important feature of this statistical way of evaluation is the existence of statistical errors. The result of such a calculation is usually presented in the form \[ \langle A \rangle = \bar{A} \pm \sigma_{\bar{A}}, \] where the variance of \(\bar{A}\) decreases with the number \(n\) of configurations as \[ \sigma_{\bar{A}} \sim \frac{1}{n^{1/2}}. \]
The results obtained by means of the Monte Carlo method differ from the desired physical results by different sorts of errors. The most important error sources are the statistical errors discussed above, the effects of the finite lattice spacing, and the effects of the finite lattice volume.
As one is only able to perform calculations at finite lattice spacing, it is an important issue to get the extrapolation process to the continuum limit under control. Since the lattice spacing is the regulator of the theory, it should be useful to apply renormalization group techniques to this problem. Knowing the functional dependence of the bare coupling \(g_0\) on the regulator, in other words solving the renormalization group equation, we should know how to vary the bare coupling of our theory in order to reach a continuum limit. Let us discuss this idea in more detail.
In the continuum limit the lattice spacing \(a\) is supposed to go to zero, while physical masses \(m\) should approach a finite limit. The lattice spacing, however, is not a dimensionless quantity, therefore we have to fix some mass scale \(m\ ,\) e.g. some particle mass, and consider the limit \(a m \rightarrow 0\ .\) The inverse of that, \[ \frac{1}{am} \equiv \xi, \] can be regarded as a correlation length. In the continuum limit \(\xi\) has to go to infinity, which is called a critical point of the theory. In Figure 7 this is illustrated on a two-dimensional lattice with different correlation lengths.
In pure gauge theory, there is a single, dimensionless bare coupling \(g_0\) and \(am\) is clearly a function of \(g_0\ .\) In order to approach the continuum limit, we have to vary \(g_0\) such that \(am \rightarrow 0\ .\) How this is done, is controlled by a renormalization group equation: \[ -a \frac{\partial g_0}{\partial a} = \beta_{LAT}(g_0) = -\beta_0 g_0^3 - \beta_1 g_0^5 + \dots, \] where the first term of the expansion is \[ \beta_0 = \frac{11}{3}\, N\, \frac{1}{16\pi^2}. \] In the perturbative regime of \(g_0\) this equation implies that for decreasing \(am\) the bare coupling \(g_0\) is also decreasing, getting even closer to zero. Hence the continuum limit is associated with the limit \[ g_0 \rightarrow 0 \qquad \mbox{(continuum limit).} \] The solution of the renormalization group equation up to second order in \(g_0\) is \[ a = \Lambda^{-1}_{LAT}\ \exp \left(-\frac{1}{2\beta_0 g_0^2}\right) \ (\beta_0 g_0^2)^{-\frac{\beta_1}{2\beta_0^2}}\ \{1+ \mathcal{O}(g_0^2) \}, \] where the lattice \(\Lambda\)-parameter \(\Lambda_{LAT}\) appears. Solving for \(g_0\) yields \[ g_0^2 = \frac{-1}{\beta_0 \log{a^2\Lambda_{LAT}^2}} + \dots, \] which again reveals the vanishing of \(g_0\) in the continuum limit: \[ g_0^2 \rightarrow 0 \quad \mbox{for} \ a \rightarrow 0. \] We can also observe that \[\tag{5} am = C\, \exp \left( -\frac{1}{2\beta_0 g_0^2}\right) \cdot (\dots), \]
which shows the non-perturbative origin of the mass \(m\ .\)
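As a numerical illustration, the following sketch (Python) evaluates \(a\Lambda_{LAT}\) from the two-loop formula above for pure SU(3) gauge theory; the two-loop coefficient \(\beta_1 = \frac{34}{3}\, N^2 /(16\pi^2)^2\ ,\) not quoted in the text, is the standard value from the literature.

```python
import numpy as np

# a(g_0) * Lambda_LAT from the two-loop solution quoted above,
#   a = Lambda_LAT^{-1} exp(-1/(2 beta_0 g_0^2)) (beta_0 g_0^2)^{-beta_1/(2 beta_0^2)},
# for pure SU(3) gauge theory.  beta_1 is the standard two-loop coefficient,
# taken from the literature rather than from the text above.
N = 3
beta0 = (11.0 / 3.0) * N / (16.0 * np.pi**2)
beta1 = (34.0 / 3.0) * N**2 / (16.0 * np.pi**2) ** 2

def a_times_Lambda(g0sq):
    return (np.exp(-1.0 / (2.0 * beta0 * g0sq))
            * (beta0 * g0sq) ** (-beta1 / (2.0 * beta0**2)))

for g0sq in (1.2, 1.0, 0.9):
    print(g0sq, a_times_Lambda(g0sq))   # a Lambda_LAT shrinks as g_0 decreases
```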
These considerations, based on the perturbative \(\beta\)-function, motivate the following hypothesis: the continuum limit of a gauge theory on a lattice is to be taken at \(g_0 \rightarrow 0\ .\) Moreover, we expect that it involves massive interacting glueballs and static quark confinement.
The scenario for approaching the continuum limit then is as follows. Calculating masses in lattice units, i.e. numbers \(am\ ,\) and decreasing \(g_0\ ,\) we should reach a region where dimensionless quantities \(am\) follow a behaviour as given by Eq.(5), which is called asymptotic scaling.
For mass ratios it can be shown that the exponential dependence on \(1/g_0^2\) cancels out and it is thought that near the continuum limit \[ \frac{m_1}{m_2} = \mbox{const.} \times (1+ \mathcal{O}(a^p)) \] for some integer \(p\ .\) Such a behaviour, \(m_1 / m_2 \approx\) const., is called scaling. In numerical simulations scaling of various physical quantities has been established for lattice gauge theories, lattice QCD and other models, whereas confirmation of asymptotic scaling is much more demanding.
See also: Path integral, Gauge theories, Lattice gauge theories, Quantum electrodynamics, Quantum chromodynamics, Asymptotic freedom, Lattice chiral fermions, Renormalization, Renormalization group