Geometric algebra

From HandWiki - Reading time: 46 min

Short description: Algebraic structure designed for geometry

In mathematics, a geometric algebra (also known as a real Clifford algebra) is an extension of elementary algebra to work with geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division and addition of objects of different dimensions.

The geometric product was first briefly mentioned by Hermann Grassmann,[1] who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). Clifford defined the Clifford algebra and its product as a unification of the Grassmann algebra and Hamilton's quaternion algebra. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries.[2] In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized in the 1960s by Hestenes, who advocated its importance to relativistic physics.[3]

The scalars and vectors have their usual interpretation, and make up distinct subspaces of a geometric algebra. Bivectors provide a more natural representation of the pseudovector quantities of vector calculus, such as oriented area, oriented angle of rotation, torque, angular momentum and the electromagnetic field. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of [math]\displaystyle{ V }[/math] and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike a vector algebra, a geometric algebra naturally accommodates any number of dimensions and any quadratic form such as in relativity.

Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space) and the conformal geometric algebra. Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes[4] and Chris Doran,[5] as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity.[6] GA has also found use as a computational tool in computer graphics[7] and robotics.

Definition and notation

There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic,[8] "full of geometric significance" and equivalent to the universal Clifford algebra.[9] Given a finite-dimensional vector space [math]\displaystyle{ V }[/math] over a field [math]\displaystyle{ F }[/math] with a symmetric bilinear form (the inner product, e.g. the Euclidean or Lorentzian metric) [math]\displaystyle{ g : V \times V \to F }[/math], the geometric algebra of the quadratic space [math]\displaystyle{ (V, g) }[/math] is the Clifford algebra [math]\displaystyle{ \operatorname{Cl}(V, g) }[/math], whose elements are called multivectors. (The term multivector is often used more specifically for elements of the exterior algebra.) As usual in this domain, for the remainder of this article, only the real case, [math]\displaystyle{ F = \R }[/math], will be considered. The notation [math]\displaystyle{ \mathcal G(p,q) }[/math] (respectively [math]\displaystyle{ \mathcal G(p,q,r) }[/math]) will be used to denote a geometric algebra for which the bilinear form [math]\displaystyle{ g }[/math] has the signature [math]\displaystyle{ (p,q) }[/math] (respectively [math]\displaystyle{ (p,q,r) }[/math]).

The essential product in the algebra is called the geometric product, and the product in the contained exterior algebra is called the exterior product (frequently called the wedge product and less often the outer product[lower-alpha 1]). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol [math]\displaystyle{ \wedge }[/math]. The above definition of the geometric algebra is abstract, so we summarize the properties of the geometric product by the following set of axioms. For multivectors [math]\displaystyle{ A, B, C\in \mathcal{G}(p,q) }[/math], the geometric product has the following properties:

  • [math]\displaystyle{ AB \in \mathcal{G}(p,q) }[/math] (closure)
  • [math]\displaystyle{ 1A = A1 = A }[/math], where [math]\displaystyle{ 1 }[/math] is the identity element (existence of an identity element)
  • [math]\displaystyle{ A(BC)=(AB)C }[/math] (associativity)
  • [math]\displaystyle{ A(B+C)=AB+AC }[/math] and [math]\displaystyle{ (B+C)A=BA+CA }[/math] (distributivity)
  • [math]\displaystyle{ a^2 = g(a,a)1 }[/math], where [math]\displaystyle{ a }[/math] is any element of the subspace [math]\displaystyle{ V }[/math] of the algebra.

The exterior product has the same properties, except that the last property above is replaced by [math]\displaystyle{ a \wedge a = 0 }[/math] for [math]\displaystyle{ a \in V }[/math].

Note that in the last property above, the real number [math]\displaystyle{ g(a,a) }[/math] need not be nonnegative if [math]\displaystyle{ g }[/math] is not positive-definite. An important property of the geometric product is the existence of elements having a multiplicative inverse. For a vector [math]\displaystyle{ a }[/math], if [math]\displaystyle{ a^2 \ne 0 }[/math] then [math]\displaystyle{ a^{-1} }[/math] exists and is equal to [math]\displaystyle{ g(a,a)^{-1}a }[/math]. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if [math]\displaystyle{ u }[/math] is a vector in [math]\displaystyle{ V }[/math] such that [math]\displaystyle{ u^2 = 1 }[/math], the element [math]\displaystyle{ \textstyle\frac{1}{2}(1 + u) }[/math] is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse.[lower-alpha 2]
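These axioms can be made concrete with a small computation. The following Python sketch (an illustration written for this article, not an implementation from the cited literature) encodes basis blades of an orthogonal basis as bitmasks and multivectors as dictionaries mapping blades to coefficients; it checks that orthogonal basis vectors anticommute, that a vector squares to the scalar [math]\displaystyle{ g(a,a) }[/math], and that [math]\displaystyle{ g(a,a)^{-1}a }[/math] is indeed the inverse of [math]\displaystyle{ a }[/math].

```python
from collections import defaultdict

def reorder_sign(a, b):
    # Sign produced by moving the basis vectors of blade b (bitmask) past
    # those of blade a into canonical, index-increasing order.
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def blade_product(a, b, sig):
    # Geometric product of two basis blades for an orthogonal basis whose
    # squares are listed in sig, e.g. sig = (1, 1, 1) for G(3,0).
    s = reorder_sign(a, b)
    for i, sq in enumerate(sig):
        if (a & b) & (1 << i):      # e_i e_i contributes its square
            s *= sq
    return s, a ^ b

def gp(A, B, sig):
    # Geometric product of multivectors stored as {blade-bitmask: coefficient}.
    out = defaultdict(float)
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_product(ba, bb, sig)
            out[blade] += s * ca * cb
    return {b: c for b, c in out.items() if c != 0.0}

sig = (1, 1, 1)                               # G(3,0)
e1, e2 = {0b001: 1.0}, {0b010: 1.0}
print(gp(e1, e2, sig), gp(e2, e1, sig))       # e1 e2 = -e2 e1
a = {0b001: 3.0, 0b010: 4.0}                  # a = 3 e1 + 4 e2
print(gp(a, a, sig))                          # {0: 25.0}, i.e. a^2 = g(a,a)
a_inv = {b: c / 25.0 for b, c in a.items()}   # g(a,a)^{-1} a
print(gp(a, a_inv, sig))                      # {0: 1.0}
```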

It is usual to identify [math]\displaystyle{ \R }[/math] and [math]\displaystyle{ V }[/math] with their images under the natural embeddings [math]\displaystyle{ \R \to \mathcal{G}(p,q) }[/math] and [math]\displaystyle{ V \to \mathcal{G}(p,q) }[/math]. In this article, this identification is assumed. Throughout, the terms scalar and vector refer to elements of [math]\displaystyle{ \R }[/math] and [math]\displaystyle{ V }[/math] respectively (and of their images under this embedding).

Geometric product

Given two vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], if the geometric product [math]\displaystyle{ ab }[/math] is anticommutative, they are perpendicular (top) because [math]\displaystyle{ a \cdot b = 0 }[/math]; if it is commutative, they are parallel (bottom) because [math]\displaystyle{ a \wedge b = 0 }[/math].[10]
Orientation defined by an ordered set of vectors.
Reversed orientation corresponds to negating the exterior product.
Geometric interpretation of grade-[math]\displaystyle{ n }[/math] elements in a real exterior algebra for [math]\displaystyle{ n = 0 }[/math] (signed point), [math]\displaystyle{ 1 }[/math] (directed line segment, or vector), [math]\displaystyle{ 2 }[/math] (oriented plane element), [math]\displaystyle{ 3 }[/math] (oriented volume). The exterior product of [math]\displaystyle{ n }[/math] vectors can be visualized as any [math]\displaystyle{ n }[/math]-dimensional shape (e.g. [math]\displaystyle{ n }[/math]-parallelotope, [math]\displaystyle{ n }[/math]-ellipsoid); with magnitude (hypervolume), and orientation defined by that on its [math]\displaystyle{ (n - 1) }[/math]-dimensional boundary and on which side the interior is.[11][12]

We may write the geometric product of any two vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] as the sum of a symmetric product and an antisymmetric product:

[math]\displaystyle{ ab = \frac{1}{2} (ab + ba) + \frac{1}{2} (ab - ba) . }[/math]

Thus we can define the inner product[lower-alpha 3] of vectors as

[math]\displaystyle{ a \cdot b := g(a,b), }[/math]

so that the symmetric product can be written as

[math]\displaystyle{ \frac{1}{2}(ab + ba) = \frac{1}{2} \left((a + b)^2 - a^2 - b^2\right) = a \cdot b . }[/math]

Conversely, [math]\displaystyle{ g }[/math] is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained exterior algebra:

[math]\displaystyle{ a \wedge b := \frac{1}{2}(ab - ba) = -(b \wedge a) . }[/math]

Then by simple addition:

[math]\displaystyle{ ab=a \cdot b + a \wedge b, }[/math] which is the vector (ungeneralized) form of the geometric product.
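For example, in [math]\displaystyle{ \mathcal G(2,0) }[/math] with orthonormal basis vectors [math]\displaystyle{ e_1, e_2 }[/math], taking [math]\displaystyle{ a = e_1 }[/math] and [math]\displaystyle{ b = e_1 + e_2 }[/math] gives [math]\displaystyle{ ab = e_1(e_1 + e_2) = 1 + e_1 e_2 }[/math], which splits into the scalar part [math]\displaystyle{ a \cdot b = 1 }[/math] and the bivector part [math]\displaystyle{ a \wedge b = e_1 e_2 }[/math].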

The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are parallel if their geometric product is equal to their inner product, whereas [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The exterior product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in [math]\displaystyle{ 3 }[/math] dimensions with positive-definite quadratic form is closely related to their exterior product.

Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras.

The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities

[math]\displaystyle{ \begin{align} 1 \wedge a_i &= a_i \wedge 1 = a_i \\ a_1 \wedge a_2\wedge\cdots\wedge a_r &= \frac{1}{r!}\sum_{\sigma\in\mathfrak{S}_r} \operatorname{sgn}(\sigma) a_{\sigma(1)}a_{\sigma(2)} \cdots a_{\sigma(r)}, \end{align} }[/math]

where the sum is over all permutations of the indices, with [math]\displaystyle{ \operatorname{sgn}(\sigma) }[/math] the sign of the permutation, and [math]\displaystyle{ a_i }[/math] are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an alternating algebra.

The equivalent structure equation for Clifford algebra is [13][14]

[math]\displaystyle{ a_1 a_2 a_3 \dots a_n = \sum^{[\frac{n}2]}_{i=0} \sum_{\mu\in{}\mathcal{C}} (-1)^k Pf(a_{\mu_1}\cdot a_{\mu_2},\dots,a_{\mu_{2i-1}} \cdot a_{\mu_{2i}}) a_{\mu_{2i+1}}\land\dots\land a_{\mu_n} }[/math]

where [math]\displaystyle{ Pf(A) }[/math] is the Pfaffian of [math]\displaystyle{ A }[/math], [math]\displaystyle{ \mathcal{C} = \binom{n}{2i} }[/math] provides combinations, [math]\displaystyle{ \mu }[/math], of [math]\displaystyle{ n }[/math] indices divided into [math]\displaystyle{ 2i }[/math] and [math]\displaystyle{ n-2i }[/math] parts, and [math]\displaystyle{ k }[/math] is the parity of the combination.

The Pfaffian provides a metric for the exterior algebra and, as pointed out by Claude Chevalley, Clifford algebra reduces to the exterior algebra with a zero quadratic form.[15] The role the Pfaffian plays can be understood from a geometric viewpoint by developing Clifford algebra from simplices.[16] This derivation provides a better connection between Pascal's triangle and simplices because it provides an interpretation of the first column of ones.

Blades, grades, and canonical basis

A multivector that is the exterior product of [math]\displaystyle{ r }[/math] linearly independent vectors is called a blade, and is said to be of grade [math]\displaystyle{ r }[/math].[lower-alpha 5] A multivector that is the sum of blades of grade [math]\displaystyle{ r }[/math] is called a (homogeneous) multivector of grade [math]\displaystyle{ r }[/math]. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.

Consider a set of [math]\displaystyle{ r }[/math] linearly independent vectors [math]\displaystyle{ \{a_1,\ldots,a_r\} }[/math] spanning an [math]\displaystyle{ r }[/math]-dimensional subspace of the vector space. With these, we can define a real symmetric matrix (in the same way as a Gramian matrix)

[math]\displaystyle{ [\mathbf{A}]_{ij} = a_i \cdot a_j }[/math]

By the spectral theorem, [math]\displaystyle{ \mathbf{A} }[/math] can be diagonalized to diagonal matrix [math]\displaystyle{ \mathbf{D} }[/math] by an orthogonal matrix [math]\displaystyle{ \mathbf{O} }[/math] via

[math]\displaystyle{ \sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{A}]_{kl}[\mathbf{O}^{\mathrm{T}}]_{lj}=\sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{O}]_{jl}[\mathbf{A}]_{kl}=[\mathbf{D}]_{ij} }[/math]

Define a new set of vectors [math]\displaystyle{ \{e_1, \ldots,e_r\} }[/math], known as orthogonal basis vectors, to be those transformed by the orthogonal matrix:

[math]\displaystyle{ e_i=\sum_j[\mathbf{O}]_{ij}a_j }[/math]

Since orthogonal transformations preserve inner products, it follows that [math]\displaystyle{ e_i\cdot e_j=[\mathbf{D}]_{ij} }[/math] and thus the [math]\displaystyle{ \{e_1, \ldots, e_r\} }[/math] are perpendicular. In other words, the geometric product of two distinct vectors [math]\displaystyle{ e_i \ne e_j }[/math] is completely specified by their exterior product, or more generally

[math]\displaystyle{ \begin{array}{rl} e_1e_2\cdots e_r &= e_1 \wedge e_2 \wedge \cdots \wedge e_r \\ &= \left(\sum_j [\mathbf{O}]_{1j}a_j\right) \wedge \left(\sum_j [\mathbf{O}]_{2j}a_j \right) \wedge \cdots \wedge \left(\sum_j [\mathbf{O}]_{rj}a_j\right) \\ &= (\det \mathbf{O}) a_1 \wedge a_2 \wedge \cdots \wedge a_r \end{array} }[/math]

Therefore, every blade of grade [math]\displaystyle{ r }[/math] can be written as the exterior product of [math]\displaystyle{ r }[/math] vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to

[math]\displaystyle{ \hat{e}_i=\frac{1}{\sqrt{|e_i \cdot e_i|}}e_i, }[/math]

then these normalized vectors must square to [math]\displaystyle{ +1 }[/math] or [math]\displaystyle{ -1 }[/math]. By Sylvester's law of inertia, the total number of [math]\displaystyle{ +1 }[/math]s and the total number of [math]\displaystyle{ -1 }[/math]s along the diagonal matrix is invariant. By extension, the total number [math]\displaystyle{ p }[/math] of these vectors that square to [math]\displaystyle{ +1 }[/math] and the total number [math]\displaystyle{ q }[/math] that square to [math]\displaystyle{ -1 }[/math] is invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra [math]\displaystyle{ \mathcal{G}(p,q) }[/math]. For example, [math]\displaystyle{ \mathcal G(3,0) }[/math] models three-dimensional Euclidean space, [math]\displaystyle{ \mathcal G(1,3) }[/math] relativistic spacetime and [math]\displaystyle{ \mathcal G(4,1) }[/math] a conformal geometric algebra of a three-dimensional space.
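The diagonalization and signature count above can be illustrated numerically. The following NumPy sketch (with an arbitrarily chosen metric and set of vectors, purely for illustration) builds the Gram matrix of three vectors under a bilinear form of signature [math]\displaystyle{ (2,1) }[/math], orthogonalizes them with the eigenvector matrix, and recovers [math]\displaystyle{ p = 2 }[/math] and [math]\displaystyle{ q = 1 }[/math] in agreement with Sylvester's law of inertia.

```python
import numpy as np

g = np.diag([1.0, 1.0, -1.0])              # an example bilinear form of signature (2,1)
a = np.array([[1.0, 2.0, 0.0],             # rows are linearly independent vectors a_1, a_2, a_3
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

A = a @ g @ a.T                            # Gram matrix  A_ij = a_i . a_j
eigvals, V = np.linalg.eigh(A)             # A = V diag(eigvals) V^T with V orthogonal
e = V.T @ a                                # rows are the orthogonal vectors e_i

D = e @ g @ e.T                            # e_i . e_j, now diagonal
print(np.round(D, 10))                     # off-diagonal entries vanish
p = int(np.sum(np.diag(D) > 0))            # vectors that normalize to square +1
q = int(np.sum(np.diag(D) < 0))            # vectors that normalize to square -1
print(p, q)                                # 2 1
```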

The set of all possible products of [math]\displaystyle{ n }[/math] orthogonal basis vectors with indices in increasing order, including [math]\displaystyle{ 1 }[/math] as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra [math]\displaystyle{ \mathcal{G}(3,0) }[/math]:

[math]\displaystyle{ \{1, e_1, e_2, e_3, e_1e_2, e_2e_3, e_3e_1, e_1e_2e_3\} }[/math]

A basis formed this way is called a canonical basis for the geometric algebra, and any other orthogonal basis for [math]\displaystyle{ V }[/math] will produce another canonical basis. Each canonical basis consists of [math]\displaystyle{ 2^n }[/math] elements. Every multivector of the geometric algebra can be expressed as a linear combination of the canonical basis elements. If the canonical basis elements are [math]\displaystyle{ \{ B_i \mid i \in S \} }[/math] with [math]\displaystyle{ S }[/math] being an index set, then the geometric product of any two multivectors is

[math]\displaystyle{ \left( \sum_i \alpha_i B_i \right) \left( \sum_j \beta_j B_j \right) = \sum_{i,j} \alpha_i\beta_j B_i B_j . }[/math]
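The size and structure of a canonical basis are easy to enumerate. The following Python sketch (illustrative only) lists the canonical basis blades of an [math]\displaystyle{ n }[/math]-dimensional algebra by choosing orthogonal basis vectors with strictly increasing indices, and confirms that there are [math]\displaystyle{ 2^n }[/math] of them; the labels printed for [math]\displaystyle{ n = 3 }[/math] differ from the basis displayed above only in the ordering convention used for the grade-[math]\displaystyle{ 2 }[/math] blades.

```python
from itertools import combinations

def canonical_basis(n):
    # All products of orthogonal basis vectors with strictly increasing
    # indices, including the empty product 1.
    labels = []
    for r in range(n + 1):
        for combo in combinations(range(1, n + 1), r):
            labels.append("1" if r == 0 else "e" + "".join(map(str, combo)))
    return labels

print(canonical_basis(3))
# ['1', 'e1', 'e2', 'e3', 'e12', 'e13', 'e23', 'e123']
print(len(canonical_basis(3)) == 2 ** 3)   # True: 2^n basis elements
```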

The terminology "[math]\displaystyle{ k }[/math]-vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of [math]\displaystyle{ k }[/math] vectors). By way of example, [math]\displaystyle{ e_1 \wedge e_2 + e_3 \wedge e_4 }[/math] in [math]\displaystyle{ \mathcal{G}(4,0) }[/math] cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only [math]\displaystyle{ 0 }[/math]-, [math]\displaystyle{ 1 }[/math]-, [math]\displaystyle{ (n-1) }[/math]- and [math]\displaystyle{ n }[/math]-vectors are always blades in [math]\displaystyle{ n }[/math]-space.

Versor

A [math]\displaystyle{ k }[/math]-versor is a multivector that can be expressed as the geometric product of [math]\displaystyle{ k }[/math] invertible vectors.[lower-alpha 6][18] Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst.[19]

Some authors use the term "versor product" to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form.[lower-alpha 7] Specifically, a mapping of vectors of the form

[math]\displaystyle{ V \to V : a \mapsto RaR^{-1} }[/math] extends to the outermorphism [math]\displaystyle{ \mathcal{G}(V) \to \mathcal{G}(V) : A \mapsto RAR^{-1}. }[/math]

Since both operators and operand are versors, there is potential for alternative examples such as rotating a rotor or reflecting a spinor, provided that some geometrical or physical significance can be attached to such operations.

By the Cartan–Dieudonné theorem, every isometry can be expressed as a composition of reflections in hyperplanes; since composed reflections provide rotations, it follows that orthogonal transformations are versors.

In group terms, for a real, non-degenerate [math]\displaystyle{ \mathcal G(p,q) }[/math], having identified the group [math]\displaystyle{ \mathcal G^\times }[/math] as the group of all invertible elements of [math]\displaystyle{ \mathcal G }[/math], Lundholm gives a proof that the "versor group" [math]\displaystyle{ \{ v_1 v_2 \cdots v_k \in G : v_i \in V^\times\} }[/math] (the set of invertible versors) is equal to the Lipschitz group [math]\displaystyle{ \Gamma }[/math] (a.k.a. Clifford group, although Lundholm deprecates this usage).[20]

Subgroups of the Lipschitz group

Lundholm defines the [math]\displaystyle{ \operatorname{Pin} }[/math], [math]\displaystyle{ \operatorname{Spin} }[/math], and [math]\displaystyle{ \operatorname{Spin}^+ }[/math] subgroups, generated by unit vectors, and in the case of [math]\displaystyle{ \operatorname{Spin} }[/math] and [math]\displaystyle{ \operatorname{Spin}^+ }[/math], only an even number of such vector factors can be present.[21]

The subgroups, their definitions, and the corresponding GA terms are:

  • [math]\displaystyle{ \Gamma }[/math] (the full Lipschitz group): versors
  • [math]\displaystyle{ \operatorname{Pin} }[/math], defined as [math]\displaystyle{ \{X \in \Gamma : X\tilde X = \pm 1\} }[/math]: unit versors
  • [math]\displaystyle{ \operatorname{Spin} }[/math], defined as [math]\displaystyle{ \operatorname{Pin} \cap \mathcal{G}^+ }[/math]: even unit versors
  • [math]\displaystyle{ \operatorname{Spin}^{+} }[/math], defined as [math]\displaystyle{ \{X \in \operatorname{Spin} : X\tilde X = 1\} }[/math]: rotors

Spinors are defined as elements of the even subalgebra of a real GA with spinor norm [math]\displaystyle{ 1 }[/math]. Multiple analyses of spinors use GA as a representation.[22]

Grade projection

Using an orthogonal basis, a graded vector space structure can be established. Elements of the geometric algebra that are scalar multiples of [math]\displaystyle{ 1 }[/math] are grade-[math]\displaystyle{ 0 }[/math] blades and are called scalars. Multivectors that are in the span of [math]\displaystyle{ \{e_1,\ldots,e_n\} }[/math] are grade-[math]\displaystyle{ 1 }[/math] blades and are the ordinary vectors. Multivectors in the span of [math]\displaystyle{ \{e_ie_j\mid 1\leq i\lt j\leq n\} }[/math] are grade-[math]\displaystyle{ 2 }[/math] blades and are the bivectors. This terminology continues through to the last grade of [math]\displaystyle{ n }[/math]-vectors. Alternatively, grade-[math]\displaystyle{ n }[/math] blades are called pseudoscalars, grade-[math]\displaystyle{ (n-1) }[/math] blades pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of mixed grade. The grading of multivectors is independent of the basis chosen originally.

This is a grading as a vector space, but not as an algebra. Because the product of an [math]\displaystyle{ r }[/math]-blade and an [math]\displaystyle{ s }[/math]-blade is contained in the span of [math]\displaystyle{ 0 }[/math] through [math]\displaystyle{ r+s }[/math]-blades, the geometric algebra is a filtered algebra.

A multivector [math]\displaystyle{ A }[/math] may be decomposed with the grade-projection operator [math]\displaystyle{ \langle A \rangle _r }[/math], which outputs the grade-[math]\displaystyle{ r }[/math] portion of [math]\displaystyle{ A }[/math]. As a result:

[math]\displaystyle{ A = \sum_{r=0}^{n} \langle A \rangle _r }[/math]

As an example, the geometric product of two vectors [math]\displaystyle{ a b = a \cdot b + a \wedge b = \langle a b \rangle_0 + \langle a b \rangle_2 }[/math] since [math]\displaystyle{ \langle a b \rangle_0=a\cdot b }[/math] and [math]\displaystyle{ \langle a b \rangle_2 = a\wedge b }[/math] and [math]\displaystyle{ \langle a b \rangle_i=0 }[/math], for [math]\displaystyle{ i }[/math] other than [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ 2 }[/math].

The decomposition of a multivector [math]\displaystyle{ A }[/math] may also be split into those components that are even and those that are odd:

[math]\displaystyle{ A^{+} = \langle A \rangle _0 + \langle A \rangle _2 + \langle A \rangle _4 + \cdots }[/math]
[math]\displaystyle{ A^{-} = \langle A \rangle _1 + \langle A \rangle _3 + \langle A \rangle _5 + \cdots }[/math]

This is the result of forgetting structure from a [math]\displaystyle{ \mathrm{Z} }[/math]-graded vector space to [math]\displaystyle{ \mathrm{Z}_2 }[/math]-graded vector space. The geometric product respects this coarser grading. Thus in addition to being a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-graded vector space, the geometric algebra is a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-graded algebra or superalgebra.
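In the blade-coefficient representation sketched earlier, grade projection and the even/odd split are simple filters: the grade of a basis blade is the number of basis vectors it contains. The following Python fragment (illustrative, using bitmask blade labels) applies them to a mixed-grade multivector.

```python
def grade(blade_bits):
    # The grade of a basis blade equals the number of set bits in its bitmask.
    return bin(blade_bits).count("1")

def grade_project(A, r):
    # <A>_r : keep only the grade-r part of {blade-bitmask: coefficient}.
    return {b: c for b, c in A.items() if grade(b) == r}

def even_part(A):
    return {b: c for b, c in A.items() if grade(b) % 2 == 0}

def odd_part(A):
    return {b: c for b, c in A.items() if grade(b) % 2 == 1}

M = {0b00: 1.0, 0b01: 2.0, 0b11: 1.0}      # 1 + 2 e1 + e1e2, a mixed-grade multivector
print(grade_project(M, 0))                 # {0b00: 1.0}
print(grade_project(M, 2))                 # {0b11: 1.0}
print(even_part(M), odd_part(M))           # splits into 1 + e1e2 and 2 e1
```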

The product of two even elements is again even, so the even multivectors form an even subalgebra. The even subalgebra of an [math]\displaystyle{ n }[/math]-dimensional geometric algebra is isomorphic (without preserving either filtration or grading) to a full geometric algebra of [math]\displaystyle{ (n-1) }[/math] dimensions. Examples include [math]\displaystyle{ \mathcal G^{+}(2,0) \cong \mathcal{G}(0,1) }[/math] and [math]\displaystyle{ \mathcal{G}^{+}(1,3) \cong \mathcal G(3,0) }[/math].

Representation of subspaces

Geometric algebra represents subspaces of [math]\displaystyle{ V }[/math] as blades, and so they coexist in the same algebra with vectors from [math]\displaystyle{ V }[/math]. A [math]\displaystyle{ k }[/math]-dimensional subspace [math]\displaystyle{ W }[/math] of [math]\displaystyle{ V }[/math] is represented by taking an orthogonal basis [math]\displaystyle{ \{b_1,b_2,\ldots, b_k\} }[/math] and using the geometric product to form the blade [math]\displaystyle{ D = b_1b_2\cdots b_k }[/math]. There are multiple blades representing [math]\displaystyle{ W }[/math]; all those representing [math]\displaystyle{ W }[/math] are scalar multiples of [math]\displaystyle{ D }[/math]. These blades can be separated into two sets: positive multiples of [math]\displaystyle{ D }[/math] and negative multiples of [math]\displaystyle{ D }[/math]. The positive multiples of [math]\displaystyle{ D }[/math] are said to have the same orientation as [math]\displaystyle{ D }[/math], and the negative multiples the opposite orientation.

Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) [math]\displaystyle{ n }[/math]-blades provide but that (the generalized class of) grade-[math]\displaystyle{ n }[/math] multivectors do not when [math]\displaystyle{ n \ge 4 }[/math].

Unit pseudoscalars

Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace [math]\displaystyle{ W }[/math] of [math]\displaystyle{ V }[/math] is a blade that is the product of the members of an orthonormal basis for [math]\displaystyle{ W }[/math]. It can be shown that if [math]\displaystyle{ I }[/math] and [math]\displaystyle{ I' }[/math] are both unit pseudoscalars for [math]\displaystyle{ W }[/math], then [math]\displaystyle{ I = \pm I' }[/math] and [math]\displaystyle{ I^2 = \pm 1 }[/math]. If one doesn't choose an orthonormal basis for [math]\displaystyle{ W }[/math], then the Plücker embedding gives a vector in the exterior algebra but only up to scaling. Using the vector space isomorphism between the geometric algebra and exterior algebra, this gives the equivalence class of [math]\displaystyle{ \alpha I }[/math] for all [math]\displaystyle{ \alpha \neq 0 }[/math]. Orthonormality gets rid of this ambiguity except for the signs above.

Suppose the geometric algebra [math]\displaystyle{ \mathcal{G}(n,0) }[/math] with the familiar positive definite inner product on [math]\displaystyle{ \R^n }[/math] is formed. Given a plane (two-dimensional subspace) of [math]\displaystyle{ \R^n }[/math], one can find an orthonormal basis [math]\displaystyle{ \{ b_1, b_2 \} }[/math] spanning the plane, and thus find a unit pseudoscalar [math]\displaystyle{ I = b_1 b_2 }[/math] representing this plane. The geometric product of any two vectors in the span of [math]\displaystyle{ b_1 }[/math] and [math]\displaystyle{ b_2 }[/math] lies in [math]\displaystyle{ \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }[/math], that is, it is the sum of a [math]\displaystyle{ 0 }[/math]-vector and a [math]\displaystyle{ 2 }[/math]-vector.

By the properties of the geometric product, [math]\displaystyle{ I^2 = b_1 b_2 b_1 b_2 = -b_1 b_2 b_2 b_1 = -1 }[/math]. The resemblance to the imaginary unit is not incidental: the subspace [math]\displaystyle{ \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }[/math] is [math]\displaystyle{ \R }[/math]-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each two-dimensional subspace of [math]\displaystyle{ V }[/math] on which the quadratic form is definite.

It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to [math]\displaystyle{ -1 }[/math], and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.

In [math]\displaystyle{ \mathcal{G}(3,0) }[/math], a further familiar case occurs. Given a canonical basis consisting of orthonormal vectors [math]\displaystyle{ e_i }[/math] of [math]\displaystyle{ V }[/math], the set of all [math]\displaystyle{ 2 }[/math]-vectors is spanned by

[math]\displaystyle{ \{ e_3 e_2 , e_1 e_3 , e_2 e_1 \} . }[/math]

Labelling these [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math] and [math]\displaystyle{ k }[/math] (momentarily deviating from our uppercase convention), the subspace generated by [math]\displaystyle{ 0 }[/math]-vectors and [math]\displaystyle{ 2 }[/math]-vectors is exactly [math]\displaystyle{ \{ \alpha_0 + i \alpha_1 + j \alpha_2 + k \alpha_3 \mid \alpha_i \in \R\} }[/math]. This set is seen to be the even subalgebra of [math]\displaystyle{ \mathcal{G}(3,0) }[/math], and furthermore is isomorphic as an [math]\displaystyle{ \R }[/math]-algebra to the quaternions, another important algebraic system.

Extensions of the inner and exterior products

It is common practice to extend the exterior product on vectors to the entire algebra. This may be done through the use of the above mentioned grade projection operator:

[math]\displaystyle{ C \wedge D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{r+s} }[/math]     (the exterior product)

This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product:

[math]\displaystyle{ C \times D := \tfrac{1}{2}(CD-DC) }[/math]     (the commutator product)

The regressive product (usually referred to as the "meet") is the dual of the exterior product (or "join" in this context).[lower-alpha 8] The dual specification of elements permits, for blades [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], the intersection (or meet) where the duality is to be taken relative to the smallest grade blade containing both [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] (the join).[24]

[math]\displaystyle{ C \vee D := ((CI^{-1}) \wedge (DI^{-1}))I }[/math]

with [math]\displaystyle{ I }[/math] the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative.[25]

The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper (Dorst 2002) gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.

Among these several different generalizations of the inner product on vectors are:

[math]\displaystyle{ C \;\rfloor\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{s-r} }[/math]   (the left contraction)
[math]\displaystyle{ C \;\lfloor\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{r-s} }[/math]   (the right contraction)
[math]\displaystyle{ C * D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{0} }[/math]   (the scalar product)
[math]\displaystyle{ C \bullet D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{|s-r|} }[/math]   (the "(fat) dot" product)[lower-alpha 9]

(Dorst 2002) makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. For example,

[math]\displaystyle{ C \;\rfloor\; D = ( C \wedge ( D I^{-1} ) ) I }[/math]
[math]\displaystyle{ C \;\lfloor\; D = I ( ( I^{-1} C) \wedge D ) }[/math]
[math]\displaystyle{ ( A \wedge B ) * C = A * ( B \;\rfloor\; C ) }[/math]
[math]\displaystyle{ C * ( B \wedge A ) = ( C \;\lfloor\; B ) * A }[/math]
[math]\displaystyle{ A \;\rfloor\; ( B \;\rfloor\; C ) = ( A \wedge B ) \;\rfloor\; C }[/math]
[math]\displaystyle{ ( A \;\rfloor\; B ) \;\lfloor\; C = A \;\rfloor\; ( B \;\lfloor\; C ) . }[/math]

Benefits of using the left contraction as an extension of the inner product on vectors include that the identity [math]\displaystyle{ ab = a \cdot b + a \wedge b }[/math] is extended to [math]\displaystyle{ aB = a \;\rfloor\; B + a \wedge B }[/math] for any vector [math]\displaystyle{ a }[/math] and multivector [math]\displaystyle{ B }[/math], and that the projection operation [math]\displaystyle{ \mathcal{P}_b (a) = (a \cdot b^{-1})b }[/math] is extended to [math]\displaystyle{ \mathcal{P}_B (A) = (A \;\rfloor\; B^{-1}) \;\rfloor\; B }[/math] for any blade [math]\displaystyle{ B }[/math] and any multivector [math]\displaystyle{ A }[/math] (with a minor modification to accommodate null [math]\displaystyle{ B }[/math], given below).

Dual basis

Let [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math] be a basis of [math]\displaystyle{ V }[/math], i.e. a set of [math]\displaystyle{ n }[/math] linearly independent vectors that span the [math]\displaystyle{ n }[/math]-dimensional vector space [math]\displaystyle{ V }[/math]. The basis that is dual to [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math] is the set of elements of the dual vector space [math]\displaystyle{ V^{*} }[/math] that forms a biorthogonal system with this basis, thus being the elements denoted [math]\displaystyle{ \{ e^1 , \ldots , e^n \} }[/math] satisfying

[math]\displaystyle{ e^i \cdot e_j = \delta^i{}_j, }[/math]

where [math]\displaystyle{ \delta }[/math] is the Kronecker delta.

Given a nondegenerate quadratic form on [math]\displaystyle{ V }[/math], [math]\displaystyle{ V^{*} }[/math] becomes naturally identified with [math]\displaystyle{ V }[/math], and the dual basis may be regarded as elements of [math]\displaystyle{ V }[/math], but are not in general the same set as the original basis.

Given further a GA of [math]\displaystyle{ V }[/math], let

[math]\displaystyle{ I = e_1 \wedge \cdots \wedge e_n }[/math]

be the pseudoscalar (which does not necessarily square to [math]\displaystyle{ \pm 1 }[/math]) formed from the basis [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math]. The dual basis vectors may be constructed as

[math]\displaystyle{ e^i=(-1)^{i-1}(e_1 \wedge \cdots \wedge \check{e}_i \wedge \cdots \wedge e_n) I^{-1}, }[/math]

where the [math]\displaystyle{ \check{e}_i }[/math] denotes that the [math]\displaystyle{ i }[/math]th basis vector is omitted from the product.

A dual basis is also known as a reciprocal basis or reciprocal frame.

A major usage of a dual basis is to separate vectors into components. Given a vector [math]\displaystyle{ a }[/math], scalar components [math]\displaystyle{ a^i }[/math] can be defined as

[math]\displaystyle{ a^i=a\cdot e^i\ , }[/math]

in terms of which [math]\displaystyle{ a }[/math] can be separated into vector components as

[math]\displaystyle{ a=\sum_i a^i e_i\ . }[/math]

We can also define scalar components [math]\displaystyle{ a_i }[/math] as

[math]\displaystyle{ a_i=a\cdot e_i\ , }[/math]

in terms of which [math]\displaystyle{ a }[/math] can be separated into vector components in terms of the dual basis as

[math]\displaystyle{ a=\sum_i a_i e^i\ . }[/math]
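For a Euclidean metric this construction amounts to a matrix inverse, as the following NumPy sketch illustrates (the basis chosen here is arbitrary): the reciprocal vectors [math]\displaystyle{ e^i }[/math] are the rows of [math]\displaystyle{ (\mathbf{E}^{\mathrm T})^{-1} }[/math] when the original basis vectors are the rows of [math]\displaystyle{ \mathbf{E} }[/math], and the components [math]\displaystyle{ a^i = a \cdot e^i }[/math] reassemble the vector.

```python
import numpy as np

E = np.array([[1.0, 0.0, 0.0],     # rows are a (non-orthogonal) basis e_1, e_2, e_3 of R^3
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

R = np.linalg.inv(E.T)             # rows are the reciprocal basis e^1, e^2, e^3
print(np.round(R @ E.T, 10))       # e^i . e_j = delta^i_j  (identity matrix)

a = np.array([2.0, -1.0, 3.0])
coeff = R @ a                      # a^i = a . e^i
print(np.allclose(coeff @ E, a))   # a = sum_i a^i e_i  -> True
```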

A dual basis as defined above for the vector subspace of a geometric algebra can be extended to cover the entire algebra.[26] For compactness, we'll use a single capital letter to represent an ordered set of vector indices. I.e., writing

[math]\displaystyle{ J=(j_1,\dots ,j_n)\ , }[/math]

where [math]\displaystyle{ j_1 \lt j_2 \lt \dots \lt j_n, }[/math] we can write a basis blade as

[math]\displaystyle{ e_J=e_{j_1}\wedge e_{j_2}\wedge\cdots\wedge e_{j_n}\ . }[/math]

The corresponding reciprocal blade has the indices in opposite order:

[math]\displaystyle{ e^J=e^{j_n}\wedge\cdots \wedge e^{j_2}\wedge e^{j_1}\ . }[/math]

Similar to the case above with vectors, it can be shown that

[math]\displaystyle{ e^J * e_K=\delta^J_K\ , }[/math]

where [math]\displaystyle{ * }[/math] is the scalar product.

With [math]\displaystyle{ A }[/math] a multivector, we can define scalar components as[27]

[math]\displaystyle{ A^{ij\cdots k}=(e^k\wedge\cdots\wedge e^j\wedge e^i)*A\ , }[/math]

in terms of which [math]\displaystyle{ A }[/math] can be separated into component blades as

[math]\displaystyle{ A=\sum_{i\lt j\lt \cdots\lt k} A^{ij\cdots k} e_i\wedge e_j\wedge\cdots \wedge e_k\ . }[/math]

We can alternatively define scalar components

[math]\displaystyle{ A_{ij\cdots k}=(e_k\wedge\cdots\wedge e_j\wedge e_i)*A\ , }[/math]

in terms of which [math]\displaystyle{ A }[/math] can be separated into component blades as

[math]\displaystyle{ A=\sum_{i\lt j\lt \cdots\lt k} A_{ij\cdots k} e^i\wedge e^j\wedge\cdots \wedge e^k\ . }[/math]

Linear functions

Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of linear functions on multivectors, which can still be used when necessary. The geometric algebra of an [math]\displaystyle{ n }[/math]-dimensional vector space is spanned by a basis of [math]\displaystyle{ 2^n }[/math] elements. If a multivector is represented by a [math]\displaystyle{ 2^n \times 1 }[/math] real column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the matrix multiplication by a [math]\displaystyle{ 2^n \times 2^n }[/math] real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation.

A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the outermorphism of the linear transformation is the unique[lower-alpha 10] extension of the versor. If [math]\displaystyle{ f }[/math] is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule

[math]\displaystyle{ \underline{\mathsf{f}}(a_1 \wedge a_2 \wedge \cdots \wedge a_r) = f(a_1) \wedge f(a_2) \wedge \cdots \wedge f(a_r) }[/math]

for a blade, extended to the whole algebra through linearity.
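For example, if [math]\displaystyle{ f }[/math] is the dilation [math]\displaystyle{ f(a) = 2a }[/math], its outermorphism doubles each factor, so [math]\displaystyle{ \underline{\mathsf{f}}(a \wedge b) = 2a \wedge 2b = 4\,(a \wedge b) }[/math]; more generally it multiplies a grade-[math]\displaystyle{ r }[/math] blade by [math]\displaystyle{ 2^r }[/math], and the pseudoscalar by [math]\displaystyle{ 2^n = \det f }[/math].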

Modeling geometries

Although a lot of attention has been placed on CGA, it should be noted that GA is not just one algebra: it is one of a family of algebras with the same essential structure.[28]

Vector space model

Main page: Comparison of vector algebra and geometric algebra

The even subalgebra of [math]\displaystyle{ \mathcal G(2,0) }[/math] is isomorphic to the complex numbers, as may be seen by writing a vector [math]\displaystyle{ P }[/math] in terms of its components in an orthonormal basis and left multiplying by the basis vector [math]\displaystyle{ e_1 }[/math], yielding

[math]\displaystyle{ Z = e_1 P = e_1 ( x e_1 + y e_2) = x (1) + y ( e_1 e_2) , }[/math]

where we identify [math]\displaystyle{ i \mapsto e_1e_2 }[/math] since

[math]\displaystyle{ (e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -e_1 e_1 e_2 e_2 = -1 . }[/math]

Similarly, the even subalgebra of [math]\displaystyle{ \mathcal G(3,0) }[/math] with basis [math]\displaystyle{ \{1, e_2 e_3, e_3 e_1, e_1 e_2 \} }[/math] is isomorphic to the quaternions as may be seen by identifying [math]\displaystyle{ i \mapsto -e_2 e_3 }[/math], [math]\displaystyle{ j \mapsto -e_3 e_1 }[/math] and [math]\displaystyle{ k \mapsto -e_1 e_2 }[/math].

Every associative algebra has a matrix representation; replacing the three Cartesian basis vectors by the Pauli matrices gives a representation of [math]\displaystyle{ \mathcal G(3,0) }[/math]:

[math]\displaystyle{ \begin{align} e_1 = \sigma_1 = \sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ e_2 = \sigma_2 = \sigma_y &= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \\ e_3 =\sigma_3 = \sigma_z &= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \,. \end{align} }[/math]

Dotting the "Pauli vector" (a dyad):

[math]\displaystyle{ \sigma = \sigma_1 e_1 + \sigma_2 e_2 + \sigma_3 e_3 }[/math] with arbitrary vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] and multiplying through gives:
[math]\displaystyle{ (\sigma \cdot a)(\sigma \cdot b) = a \cdot b + a \wedge b }[/math] (Equivalently, by inspection, [math]\displaystyle{ a \cdot b + i \sigma \cdot ( a \times b ) }[/math])
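The identity above is easy to verify numerically in the Pauli-matrix representation. The following NumPy sketch (with arbitrarily chosen vectors, purely for illustration) checks that the scalar part of [math]\displaystyle{ (\sigma \cdot a)(\sigma \cdot b) }[/math] equals the dot product, that the bivector [math]\displaystyle{ e_1 e_2 }[/math] squares to [math]\displaystyle{ -1 }[/math], and that the quaternion identification given earlier satisfies [math]\displaystyle{ ij = k }[/math].

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def vec(v):
    # A vector v1 e1 + v2 e2 + v3 e3 of G(3,0) as the matrix  sigma . v
    return sum(vi * si for vi, si in zip(v, sigma))

a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
prod = vec(a) @ vec(b)

print(np.trace(prod).real / 2, a @ b)                 # scalar part equals a . b
print(np.allclose(s1 @ s2 @ s1 @ s2, -np.eye(2)))     # (e1 e2)^2 = -1
i, j, k = -s2 @ s3, -s3 @ s1, -s1 @ s2                # quaternion units
print(np.allclose(i @ j, k))                          # i j = k  -> True
```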

Spacetime model

In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, [math]\displaystyle{ \mathcal{G}(1,3) }[/math], called spacetime algebra (STA),[3] or less commonly, [math]\displaystyle{ \mathcal{G}(3,0) }[/math], interpreted as the algebra of physical space (APS).

While in STA, points of spacetime are represented simply by vectors, in APS, points of [math]\displaystyle{ (3+1) }[/math]-dimensional spacetime are instead represented by paravectors, a three-dimensional vector (space) plus a one-dimensional scalar (time).

In spacetime algebra the electromagnetic field tensor has a bivector representation [math]\displaystyle{ {F} = ({E} + i c {B})\gamma_0 }[/math].[29] Here, the [math]\displaystyle{ i = \gamma_0 \gamma_1 \gamma_2 \gamma_3 }[/math] is the unit pseudoscalar (or four-dimensional volume element), [math]\displaystyle{ \gamma_0 }[/math] is the unit vector in time direction, and [math]\displaystyle{ E }[/math] and [math]\displaystyle{ B }[/math] are the classic electric and magnetic field vectors (with a zero time component). Using the four-current [math]\displaystyle{ {J} }[/math], Maxwell's equations then become

  • Fields: [math]\displaystyle{ D F = \mu_0 J }[/math], which splits into the homogeneous equation [math]\displaystyle{ D \wedge F = 0 }[/math] and the non-homogeneous equation [math]\displaystyle{ D ~\rfloor~ F = \mu_0 J }[/math].
  • Potentials (any gauge): homogeneous equation [math]\displaystyle{ F = D \wedge A }[/math]; non-homogeneous equation [math]\displaystyle{ D ~\rfloor~ (D \wedge A) = \mu_0 J }[/math].
  • Potentials (Lorenz gauge): [math]\displaystyle{ F = D A }[/math] with the gauge condition [math]\displaystyle{ D ~\rfloor~ A = 0 }[/math]; non-homogeneous equation [math]\displaystyle{ D^2 A = \mu_0 J }[/math].

In geometric calculus, juxtaposition of vectors such as in [math]\displaystyle{ DF }[/math] indicates the geometric product, which can be decomposed into parts as [math]\displaystyle{ DF = D ~\rfloor~ F + D \wedge F }[/math]. Here [math]\displaystyle{ D }[/math] is the covector derivative in any spacetime and reduces to [math]\displaystyle{ \bigtriangledown }[/math] in flat spacetime, where [math]\displaystyle{ \bigtriangledown }[/math] plays the same role in Minkowski [math]\displaystyle{ 4 }[/math]-spacetime as [math]\displaystyle{ \nabla }[/math] does in Euclidean [math]\displaystyle{ 3 }[/math]-space and is related to the d'Alembertian by [math]\displaystyle{ \Box=\bigtriangledown^2 }[/math]. Indeed, given an observer represented by a future pointing timelike vector [math]\displaystyle{ \gamma_0 }[/math] we have

[math]\displaystyle{ \gamma_0\cdot\bigtriangledown=\frac{1}{c}\frac{\partial}{\partial t} }[/math]
[math]\displaystyle{ \gamma_0\wedge\bigtriangledown=\nabla }[/math]

Boosts in this Lorentzian metric space have the same expression [math]\displaystyle{ e^{{\beta}} }[/math] as rotation in Euclidean space, where [math]\displaystyle{ {\beta} }[/math] is the bivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the "analogy" almost to an identity.

The Dirac matrices are a representation of [math]\displaystyle{ \mathcal G(1,3) }[/math], showing the equivalence with matrix representations used by physicists.
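A quick NumPy check (using the standard Dirac representation, included here only as an illustration) confirms that the gamma matrices satisfy the defining relation of [math]\displaystyle{ \mathcal G(1,3) }[/math], namely [math]\displaystyle{ \gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2\eta_{\mu\nu} }[/math] with [math]\displaystyle{ \eta = \operatorname{diag}(1,-1,-1,-1) }[/math].

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac representation: gamma_0 = diag(I, -I), gamma_k = [[0, sigma_k], [-sigma_k, 0]]
gamma = [np.kron(np.diag([1.0 + 0j, -1.0]), I2)]
gamma += [np.kron(np.array([[0, 1], [-1, 0]], dtype=complex), sk) for sk in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations of G(1,3) verified")
```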

Homogeneous models

Homogeneous models generally refer to a projective representation in which the elements of the one-dimensional subspaces of a vector space represent points of a geometry.

In a geometric algebra of a space of [math]\displaystyle{ n }[/math] dimensions, the rotors represent a set of transformations with [math]\displaystyle{ n(n-1)/2 }[/math] degrees of freedom, corresponding to rotations – for example, [math]\displaystyle{ 3 }[/math] when [math]\displaystyle{ n=3 }[/math] and [math]\displaystyle{ 6 }[/math] when [math]\displaystyle{ n=4 }[/math]. Geometric algebra is often used to model a projective space, i.e. as a homogeneous model: a point, line, plane, etc. is represented by an equivalence class of elements of the algebra that differ by an invertible scalar factor.

The rotors in a space of dimension [math]\displaystyle{ n+1 }[/math] have [math]\displaystyle{ n(n-1)/2+n }[/math] degrees of freedom, the same as the number of degrees of freedom in the rotations and translations combined for an [math]\displaystyle{ n }[/math]-dimensional space.

This is the case in Projective Geometric Algebra (PGA), which is used[30][31][32] to represent Euclidean isometries in Euclidean geometry (thereby covering the large majority of engineering applications of geometry). In this model, a degenerate dimension is added to the three Euclidean dimensions to form the algebra [math]\displaystyle{ \mathcal G(3,0,1) }[/math]. With a suitable identification of subspaces to represent points, lines and planes, the versors of this algebra represent all proper Euclidean isometries, which are always screw motions in 3-dimensional space, along with all improper Euclidean isometries, which includes reflections, rotoreflections, transflections, and point reflections.

PGA combines [math]\displaystyle{ \mathcal G(3,0,1) }[/math] with a dual operator to obtain meet, join, distance, and angle formulae. Depending on the author,[33][34] this could mean the Hodge star or the projective dual, though both result in identical equations being derived, albeit with different notation. In effect, the dual switches basis vectors that are present and absent in the expression of each term of the algebraic representation. For example, in the PGA of 3-dimensional space, the dual of the line [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] is the line [math]\displaystyle{ \boldsymbol{e}_{03} }[/math], because [math]\displaystyle{ \boldsymbol{e}_{ 0} }[/math] and [math]\displaystyle{ \boldsymbol{e}_{ 3} }[/math] are basis elements that are not contained in [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] but are contained in [math]\displaystyle{ \boldsymbol{e}_{03} }[/math]. In the PGA of 2-dimensional space, the dual of [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] is [math]\displaystyle{ \boldsymbol{e}_{0} }[/math], since there is no [math]\displaystyle{ \boldsymbol{e}_{3} }[/math] element.

PGA is a widely used system that combines geometric algebra with homogeneous representations in geometry, but there exist several other such systems. The conformal model discussed below is homogeneous, as is "Conic Geometric Algebra",[35] and see Plane-based geometric algebra for discussion of homogeneous models of elliptic and hyperbolic geometry compared with the Euclidean geometry derived from PGA.

Conformal model

Main page: Conformal geometric algebra

Working within GA, Euclidean space [math]\displaystyle{ \mathcal E^3 }[/math] (along with a conformal point at infinity) is embedded projectively in the CGA [math]\displaystyle{ \mathcal{G}(4,1) }[/math] via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace. This allows all conformal transformations to be performed as rotations and reflections and is covariant, extending incidence relations of projective geometry to circles and spheres.

Specifically, we add orthogonal basis vectors [math]\displaystyle{ e_+ }[/math] and [math]\displaystyle{ e_- }[/math] such that [math]\displaystyle{ e_+^2 = +1 }[/math] and [math]\displaystyle{ e_-^2 = -1 }[/math] to the basis of the vector space that generates [math]\displaystyle{ \mathcal{G}(3,0) }[/math] and identify null vectors

[math]\displaystyle{ n_{\infty} = e_- + e_+ }[/math] as a conformal point at infinity (see Compactification) and
[math]\displaystyle{ n_\text{o} = \tfrac{1}{2}(e_- - e_+) }[/math] as the point at the origin, giving
[math]\displaystyle{ n_\infty \cdot n_\text{o} = -1 . }[/math]

This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry and in this case allows the modeling of Euclidean transformations of [math]\displaystyle{ \mathbf{R}^3 }[/math] as orthogonal transformations of a subset of [math]\displaystyle{ \mathbf{R}^{4,1} }[/math].
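The defining properties of the two added null vectors follow directly from the signature, as a short NumPy check of the bilinear form of [math]\displaystyle{ \mathcal{G}(4,1) }[/math] shows (an illustrative computation only):

```python
import numpy as np

g = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])   # e1, e2, e3, e+ square to +1; e- squares to -1
dot = lambda u, v: u @ g @ v

e_plus  = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
e_minus = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

n_inf = e_minus + e_plus                  # conformal point at infinity
n_o   = 0.5 * (e_minus - e_plus)          # point at the origin

print(dot(n_inf, n_inf), dot(n_o, n_o), dot(n_inf, n_o))   # 0.0 0.0 -1.0
```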

A fast changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.

Table of models

Note in this list that [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] can be swapped and the same name applies, with relatively little change occurring; see sign convention. For example, [math]\displaystyle{ \mathcal{G}(3, 1, 0) }[/math] and [math]\displaystyle{ \mathcal{G}(1, 3, 0) }[/math] are both referred to as Spacetime Algebra.[36]

In the following list, "Blades" names the oriented geometric objects that the algebra can represent, and "Rotors" names the orientation-preserving transformations that it can represent.

  • [math]\displaystyle{ \mathcal{G}(3,0,0) }[/math]: Vectorspace GA (VGA), also called Algebra of Physical Space (APS). Blades: planes and lines through the origin. Rotors: rotations, e.g. [math]\displaystyle{ \mathrm{SO} (3) }[/math]. Notes: the first GA to be discovered.
  • [math]\displaystyle{ \mathcal{G}(3,0,1) }[/math]: Plane-based GA, Projective GA (PGA). Blades: planes, lines, and points anywhere in space. Rotors: rotations and translations, e.g. rigid motions, [math]\displaystyle{ \mathrm{SE}(3) }[/math] a.k.a. [math]\displaystyle{ \mathrm{SO}(3,0,1) }[/math]. Notes: slight modifications to the signature allow for the modelling of hyperbolic and elliptic space, see main article; cannot model the entire "projective" group.
  • [math]\displaystyle{ \mathcal{G}(3,1,0) }[/math]: Spacetime Algebra (STA). Blades: volumes, planes and lines through the origin in spacetime. Rotors: rotations and spacetime boosts, e.g. [math]\displaystyle{ \mathrm{SO}(3,1) }[/math], the Lorentz group. Notes: basis for Gauge Theory Gravity.
  • [math]\displaystyle{ \mathcal{G}(3,1,1) }[/math]: Spacetime Algebra Projectivized,[37] STAP. Blades: volumes, planes, lines, and points (events) in spacetime. Rotors: rotations, translations, and spacetime boosts (the Poincaré group).
  • [math]\displaystyle{ \mathcal{G}(4,1,0) }[/math]: Conformal GA (CGA). Blades: spheres, circles, point pairs, lines, and planes anywhere in space. Rotors: transformations of space that preserve angles (the conformal group [math]\displaystyle{ \mathrm{SO}(4,1) }[/math]).
  • [math]\displaystyle{ \mathcal{G}(4,2,0) }[/math]: Conformal Spacetime Algebra,[38] CSTA. Blades: spheres, circles, planes, lines, light-cones, and trajectories of objects with constant acceleration, all in spacetime. Rotors: conformal transformations of spacetime, e.g. transformations that preserve rapidity along arclengths through spacetime. Notes: related to twistor theory.
  • [math]\displaystyle{ \mathcal{G}(3,3,0) }[/math]: Mother Algebra.[39] Blades: unknown. Rotors: the projective group.
  • [math]\displaystyle{ \mathcal{G}(5,3,0) }[/math]: GA for Conics (GAC), or Quadric Conformal 2D GA (QC2GA).[40][41] Blades: points, point pairs/triples/quadruples, conics, pencils of up to 6 independent conics. Rotors: reflections, translations, rotations, dilations, others. Notes: conics can be created from control points and pencils of conics.
  • [math]\displaystyle{ \mathcal{G}(9,6,0) }[/math]: Quadric Conformal GA (QCGA).[42] Blades: points, tuples of up to 8 points, quadric surfaces, conics, conics on quadric surfaces (such as a spherical conic), pencils of up to 9 quadric surfaces. Rotors: reflections, translations, rotations, dilations, others. Notes: quadric surfaces can be created from control points, and their surface normals can be determined.
  • [math]\displaystyle{ \mathcal{G}(8,2,0) }[/math]: Double Conformal Geometric Algebra (DCGA).[43] Blades: points, Darboux cyclides, quadric surfaces. Rotors: reflections, translations, rotations, dilations, others. Notes: uses bivectors of two independent CGA bases to represent 5×5 symmetric "matrices" of 15 unique coefficients, at the cost of the ability to perform intersections and construction by points.

Geometric interpretation of Vectorspace Geometric Algebra

Projection and rejection

In 3D space, a bivector [math]\displaystyle{ a \land b }[/math] defines a 2D plane subspace (light blue, extends infinitely in indicated directions). Any vector [math]\displaystyle{ c }[/math] in 3D space can be decomposed into its projection [math]\displaystyle{ c_\| }[/math] onto a plane and its rejection [math]\displaystyle{ c_\perp }[/math] from this plane.

For any vector [math]\displaystyle{ a }[/math] and any invertible vector [math]\displaystyle{ m }[/math],

[math]\displaystyle{ a = amm^{-1} = (a\cdot m + a \wedge m)m^{-1} = a_{\| m} + a_{\perp m} , }[/math]

where the projection of [math]\displaystyle{ a }[/math] onto [math]\displaystyle{ m }[/math] (or the parallel part) is

[math]\displaystyle{ a_{\| m} = (a \cdot m)m^{-1} }[/math]

and the rejection of [math]\displaystyle{ a }[/math] from [math]\displaystyle{ m }[/math] (or the orthogonal part) is

[math]\displaystyle{ a_{\perp m} = a - a_{\| m} = (a\wedge m)m^{-1} . }[/math]
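For vectors with a positive-definite metric, these formulas reduce to the familiar component decomposition, since [math]\displaystyle{ m^{-1} = m/(m \cdot m) }[/math]. A short NumPy check (with arbitrary example vectors):

```python
import numpy as np

a = np.array([2.0, 1.0, 3.0])
m = np.array([1.0, 1.0, 0.0])                  # any invertible (non-null) vector

a_par  = (a @ m) / (m @ m) * m                 # (a . m) m^{-1}, the projection onto m
a_perp = a - a_par                             # the rejection from m

print(np.allclose(a_par + a_perp, a))          # decomposition recovers a -> True
print(np.isclose(a_perp @ m, 0.0))             # rejection is orthogonal to m -> True
```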

Using the concept of a [math]\displaystyle{ k }[/math]-blade [math]\displaystyle{ B }[/math] as representing a subspace of [math]\displaystyle{ V }[/math] and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible [math]\displaystyle{ k }[/math]-blade [math]\displaystyle{ B }[/math] as[lower-alpha 11]

[math]\displaystyle{ \mathcal{P}_B (A) = (A \;\rfloor\; B) \;\rfloor\; B^{-1} , }[/math]

with the rejection being defined as

[math]\displaystyle{ \mathcal{P}_B^\perp (A) = A - \mathcal{P}_B (A) . }[/math]

The projection and rejection generalize to null blades [math]\displaystyle{ B }[/math] by replacing the inverse [math]\displaystyle{ B^{-1} }[/math] with the pseudoinverse [math]\displaystyle{ B^{+} }[/math] with respect to the contractive product.[lower-alpha 12] The outcome of the projection coincides in both cases for non-null blades.[44][45] For null blades [math]\displaystyle{ B }[/math], the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used,[lower-alpha 13] as only then is the result necessarily in the subspace represented by [math]\displaystyle{ B }[/math].[44] The projection generalizes through linearity to general multivectors [math]\displaystyle{ A }[/math].[lower-alpha 14] The projection is not linear in [math]\displaystyle{ B }[/math] and does not generalize to objects [math]\displaystyle{ B }[/math] that are not blades.

Reflection

Simple reflections in a hyperplane are readily expressed in the algebra through conjugation with a single vector. These serve to generate the group of general rotoreflections and rotations.

Reflection of vector [math]\displaystyle{ c }[/math] along a vector [math]\displaystyle{ m }[/math]. Only the component of [math]\displaystyle{ c }[/math] parallel to [math]\displaystyle{ m }[/math] is negated.

The reflection [math]\displaystyle{ c' }[/math] of a vector [math]\displaystyle{ c }[/math] along a vector [math]\displaystyle{ m }[/math], or equivalently in the hyperplane orthogonal to [math]\displaystyle{ m }[/math], is the same as negating the component of [math]\displaystyle{ c }[/math] parallel to [math]\displaystyle{ m }[/math]. The result of the reflection will be

[math]\displaystyle{ c' = {-c_{\| m} + c_{\perp m}} = {-(c \cdot m)m^{-1} + (c \wedge m)m^{-1}} = {(-m \cdot c - m \wedge c)m^{-1}} = -mcm^{-1} . }[/math]
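
In ordinary vector terms, [math]\displaystyle{ -mcm^{-1} }[/math] amounts to negating the component of [math]\displaystyle{ c }[/math] along [math]\displaystyle{ m }[/math], that is [math]\displaystyle{ c - 2\,((c \cdot m)/(m \cdot m))\,m }[/math]. A minimal NumPy sketch with arbitrarily chosen vectors:

import numpy as np

c = np.array([1.0, 2.0, 3.0])   # arbitrary vector to reflect
m = np.array([0.0, 1.0, 1.0])   # arbitrary non-null vector

c_par = (c @ m) / (m @ m) * m   # component of c parallel to m
c_perp = c - c_par              # component of c orthogonal to m

c_reflected = -c_par + c_perp   # reflection of c along m (negate the parallel part)
assert np.allclose(c_reflected, c - 2 * (c @ m) / (m @ m) * m)
print(c_reflected)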

This is not the most general operation that may be regarded as a reflection when the dimension [math]\displaystyle{ n \ge 4 }[/math]. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection [math]\displaystyle{ a' }[/math] of a vector [math]\displaystyle{ a }[/math] may be written

[math]\displaystyle{ a \mapsto a' = -MaM^{-1} , }[/math]

where

[math]\displaystyle{ M = pq \cdots r }[/math] and [math]\displaystyle{ M^{-1} = (pq \cdots r)^{-1} = r^{-1} \cdots q^{-1}p^{-1} . }[/math]

If we define the reflection along a non-null vector [math]\displaystyle{ m }[/math] of a product of vectors as the reflection of every vector in the product along that same vector, then for any product of an odd number of vectors we get, for example,

[math]\displaystyle{ (abc)' = a'b'c' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1}) = -ma(m^{-1}m)b(m^{-1}m)cm^{-1} = -mabcm^{-1} \, }[/math]

and for the product of an even number of vectors that

[math]\displaystyle{ (abcd)' = a'b'c'd' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1}) = mabcdm^{-1} . }[/math]

Since every multivector can ultimately be expressed in terms of vectors, the reflection of a general multivector [math]\displaystyle{ A }[/math] using any reflection versor [math]\displaystyle{ M }[/math] may be written

[math]\displaystyle{ A \mapsto M\alpha(A)M^{-1} , }[/math]

where [math]\displaystyle{ \alpha }[/math] is the automorphism of reflection through the origin of the vector space ([math]\displaystyle{ v \mapsto -v }[/math]) extended through linearity to the whole algebra.

Rotations

A rotor [math]\displaystyle{ R_\theta }[/math] rotates vectors in a plane through the angle [math]\displaystyle{ \theta }[/math]; that is, [math]\displaystyle{ x \mapsto R_\theta x \tilde R_\theta }[/math] is a rotation of [math]\displaystyle{ x }[/math] through the angle [math]\displaystyle{ \theta }[/math]. The angle between [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] is [math]\displaystyle{ \theta/2 }[/math]. Similar interpretations are valid for a general multivector [math]\displaystyle{ X }[/math] instead of the vector [math]\displaystyle{ x }[/math].[10]

If we have a product of vectors [math]\displaystyle{ R = a_1a_2 \cdots a_r }[/math] then we denote the reverse as

[math]\displaystyle{ \tilde R = a_r\cdots a_2 a_1. }[/math]

As an example, if [math]\displaystyle{ R = ab }[/math], we get

[math]\displaystyle{ R\tilde R = abba = ab^2a = a^2b^2 = ba^2b = baab = \tilde RR. }[/math]

Scaling [math]\displaystyle{ R }[/math] so that [math]\displaystyle{ R\tilde R = 1 }[/math], we get

[math]\displaystyle{ (Rv\tilde R)^2 = Rv^{2}\tilde R = v^2R\tilde R = v^2 }[/math]

so [math]\displaystyle{ Rv\tilde R }[/math] leaves the length of [math]\displaystyle{ v }[/math] unchanged. We can also show that

[math]\displaystyle{ (Rv_1\tilde R) \cdot (Rv_2\tilde R) = v_1 \cdot v_2 }[/math]

so the transformation [math]\displaystyle{ Rv\tilde R }[/math] preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; [math]\displaystyle{ R }[/math] is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor.

There is a general method for rotating a vector: form the multivector [math]\displaystyle{ R = e^{-B \theta / 2} }[/math], which produces a rotation through the angle [math]\displaystyle{ \theta }[/math] in the plane with orientation defined by a [math]\displaystyle{ 2 }[/math]-blade [math]\displaystyle{ B }[/math].
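
As an illustrative sketch (the helper functions gp and reverse below are hand-coded for this example and assume the plane algebra [math]\displaystyle{ \mathcal G(2,0) }[/math] with basis [math]\displaystyle{ \{1, e_1, e_2, e_1e_2\} }[/math]), the rotor [math]\displaystyle{ R = e^{-e_1e_2\theta/2} = \cos(\theta/2) - \sin(\theta/2)\,e_1e_2 }[/math] can be verified numerically to send [math]\displaystyle{ e_1 }[/math] to [math]\displaystyle{ \cos\theta\, e_1 + \sin\theta\, e_2 }[/math] under [math]\displaystyle{ x \mapsto Rx\tilde R }[/math]:

import numpy as np

def gp(a, b):
    """Geometric product in G(2,0); multivectors are arrays [scalar, e1, e2, e12]."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 part
    ])

def reverse(a):
    """Reverse: flips the sign of the bivector component."""
    return np.array([a[0], a[1], a[2], -a[3]])

theta = 0.7                               # arbitrary rotation angle
# Rotor R = exp(-e1e2 * theta/2) = cos(theta/2) - sin(theta/2) e1e2
R = np.array([np.cos(theta/2), 0.0, 0.0, -np.sin(theta/2)])

x = np.array([0.0, 1.0, 0.0, 0.0])        # the vector e1
x_rot = gp(gp(R, x), reverse(R))          # x -> R x R~

expected = np.array([0.0, np.cos(theta), np.sin(theta), 0.0])
assert np.allclose(x_rot, expected)
print(x_rot)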

Rotors are a generalization of quaternions to [math]\displaystyle{ n }[/math]-dimensional spaces.

Examples and applications

Hypervolume of a parallelotope spanned by vectors

For vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] spanning a parallelogram we have

[math]\displaystyle{ a \wedge b = ((a \wedge b) b^{-1}) b = a_{\perp b} b }[/math]

with the result that the magnitude of [math]\displaystyle{ a \wedge b }[/math] is the product of the "altitude" and the "base" of the parallelogram, that is, its area.

Similar interpretations are true for any number of vectors spanning an [math]\displaystyle{ n }[/math]-dimensional parallelotope; the exterior product of vectors [math]\displaystyle{ a_1, a_2, \ldots , a_n }[/math], that is [math]\displaystyle{ \textstyle \bigwedge_{i=1}^n a_i }[/math], has a magnitude equal to the volume of the [math]\displaystyle{ n }[/math]-parallelotope. An [math]\displaystyle{ n }[/math]-vector does not necessarily have the shape of a parallelotope; this is merely a convenient visualization. It could be any shape, although its volume equals that of the parallelotope.
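
A small numerical check in three dimensions (arbitrary vectors, NumPy assumed): the "altitude times base" product [math]\displaystyle{ |a_{\perp b}||b| }[/math] agrees with the familiar parallelogram area [math]\displaystyle{ |a \times b| }[/math].

import numpy as np

a = np.array([3.0, 1.0, 0.0])   # arbitrary vectors spanning a parallelogram
b = np.array([1.0, 2.0, 2.0])

a_perp = a - (a @ b) / (b @ b) * b                      # rejection of a from b (the "altitude")
area_ga = np.linalg.norm(a_perp) * np.linalg.norm(b)    # |a ^ b| = |altitude| * |base|
area_cross = np.linalg.norm(np.cross(a, b))             # classical parallelogram area

assert np.isclose(area_ga, area_cross)
print(area_ga)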

Intersection of a line and a plane

A line L defined by points T and P (which we seek) and a plane defined by a bivector B containing points P and Q.

We may define the line parametrically by [math]\displaystyle{ p = t + \alpha \ v }[/math] where [math]\displaystyle{ p }[/math] and [math]\displaystyle{ t }[/math] are position vectors for points P and T and [math]\displaystyle{ v }[/math] is the direction vector for the line.

Then

[math]\displaystyle{ B \wedge (p-q) = 0 }[/math] and [math]\displaystyle{ B \wedge (t + \alpha v - q) = 0 }[/math]

so

[math]\displaystyle{ \alpha = \frac{B \wedge(q-t)}{B \wedge v} }[/math]

and

[math]\displaystyle{ p = t + \left(\frac{B \wedge (q-t)}{B \wedge v}\right) v. }[/math]
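
In three dimensions both [math]\displaystyle{ B \wedge (q-t) }[/math] and [math]\displaystyle{ B \wedge v }[/math] are multiples of the pseudoscalar, so their ratio is a scalar; if the bivector [math]\displaystyle{ B = b_1 \wedge b_2 }[/math] is represented by the normal [math]\displaystyle{ n = b_1 \times b_2 }[/math], the ratio becomes [math]\displaystyle{ n \cdot (q-t) / (n \cdot v) }[/math]. A sketch under this identification, with arbitrarily chosen data:

import numpy as np

# Arbitrary example data: plane through q spanned by b1, b2; line through t with direction v.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
q  = np.array([0.0, 0.0, 2.0])   # a point on the plane
t  = np.array([1.0, 1.0, 0.0])   # a point on the line
v  = np.array([0.0, 0.0, 1.0])   # direction of the line (not parallel to the plane)

n = np.cross(b1, b2)             # normal representing the bivector B = b1 ^ b2
alpha = (n @ (q - t)) / (n @ v)  # scalar ratio (B ^ (q - t)) / (B ^ v)
p = t + alpha * v                # intersection point

assert np.isclose(n @ (p - q), 0.0)   # p lies in the plane: B ^ (p - q) = 0
print(p)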

Rotating systems

The mathematical description of rotational forces such as torque and angular momentum often makes use of the cross product of vector calculus in three dimensions with a convention of orientation (which defines handedness).

The cross product in relation to the exterior product. In red are the unit normal vector, and the "parallel" unit bivector.

The cross product can be viewed in terms of the exterior product, allowing a more natural geometric interpretation of the cross product as a bivector via the dual relationship

[math]\displaystyle{ a \times b = -I (a \wedge b) . }[/math]
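
Numerically, this duality says that the components of [math]\displaystyle{ a \times b }[/math] are exactly the [math]\displaystyle{ (e_2e_3, e_3e_1, e_1e_2) }[/math] components of the bivector [math]\displaystyle{ a \wedge b }[/math]. A quick NumPy check with arbitrary vectors:

import numpy as np

a = np.array([1.0, 2.0, 3.0])   # arbitrary vectors
b = np.array([4.0, 0.0, -1.0])

# Bivector components of a ^ b on the basis (e2e3, e3e1, e1e2).
wedge = np.array([
    a[1]*b[2] - a[2]*b[1],
    a[2]*b[0] - a[0]*b[2],
    a[0]*b[1] - a[1]*b[0],
])

# Taking the dual with -I maps e2e3 -> e1, e3e1 -> e2, e1e2 -> e3,
# so the same three numbers are the components of a x b.
assert np.allclose(wedge, np.cross(a, b))
print(wedge)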

For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle.

Suppose a circular path in an arbitrary plane containing orthonormal vectors [math]\displaystyle{ \hat{u} }[/math] and [math]\displaystyle{ \hat{v} }[/math] is parameterized by the angle [math]\displaystyle{ \theta }[/math]:

[math]\displaystyle{ \mathbf{r} = r(\hat{u} \cos \theta + \hat{v} \sin \theta) = r \hat{u}(\cos \theta + \hat{u} \hat{v} \sin \theta) }[/math]

By designating the unit bivector of this plane as the imaginary number

[math]\displaystyle{ {i} = \hat{u} \hat{v} = \hat{u} \wedge \hat{v} }[/math]
[math]\displaystyle{ i^2 = -1 }[/math]

this path vector can be conveniently written in complex exponential form

[math]\displaystyle{ \mathbf{r} = r \hat{ u} e^{i\theta} }[/math]

and the derivative with respect to angle is

[math]\displaystyle{ \frac{d \mathbf{r}}{d\theta} = r \hat{u} i e^{i\theta} = \mathbf{r} i . }[/math]

So the torque, the rate of change of work [math]\displaystyle{ W }[/math], due to a force [math]\displaystyle{ F }[/math], is

[math]\displaystyle{ \tau = \frac{dW}{d\theta} = F \cdot \frac{d\mathbf{r}}{d\theta} = F \cdot (\mathbf{r} i) . }[/math]

Unlike the cross product description of torque, [math]\displaystyle{ \tau = \mathbf{r} \times F }[/math], the geometric algebra description does not introduce a vector in the normal direction; such a vector does not exist in two dimensions and is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors [math]\displaystyle{ {\hat{u}} }[/math] and [math]\displaystyle{ {\hat{v} } }[/math].
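
Since the unit bivector [math]\displaystyle{ i }[/math] of the plane squares to [math]\displaystyle{ -1 }[/math], the plane can be modelled by the complex numbers for a numerical check, with [math]\displaystyle{ \hat{u} }[/math] as the real axis and [math]\displaystyle{ \hat{v} }[/math] as the imaginary axis; the dot product of two plane vectors encoded as complex numbers [math]\displaystyle{ x, y }[/math] is then [math]\displaystyle{ \operatorname{Re}(\bar x y) }[/math]. The sketch below (arbitrary numbers) checks [math]\displaystyle{ \tau = F \cdot (\mathbf{r} i) }[/math] against the cross-product formula:

import numpy as np

r0, theta = 2.0, 0.9                 # arbitrary radius and angle
r = r0 * np.exp(1j * theta)          # position r = r0 * u_hat * e^{i theta}, with u_hat ~ 1
dr_dtheta = r * 1j                   # dr/dtheta = r i

F = 3.0 - 1.5j                       # arbitrary force lying in the plane

def dot(x, y):
    """Dot product of two plane vectors encoded as complex numbers."""
    return (np.conj(x) * y).real

tau = dot(F, dr_dtheta)              # torque = F . (r i)

# Cross-check against the 3D cross-product formula: tau equals the z-component of r x F.
r3 = np.array([r.real, r.imag, 0.0])
F3 = np.array([F.real, F.imag, 0.0])
assert np.isclose(tau, np.cross(r3, F3)[2])
print(tau)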

Geometric calculus

Main page: Geometric calculus

Geometric calculus extends the formalism to differentiation and integration, encompassing differential geometry and differential forms.[46]

Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,

[math]\displaystyle{ \int_A dA \,\nabla f = \oint_{\partial A} dx \, f }[/math]

and then one can write

[math]\displaystyle{ \nabla f = \nabla \cdot f + \nabla \wedge f }[/math]

as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it).

In one dimension, when [math]\displaystyle{ A }[/math] is a curve with endpoints [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math],

[math]\displaystyle{ \int_A dA \,\nabla f = \oint_{\partial A} dx \, f }[/math]

reduces to

[math]\displaystyle{ \int_a^b dx \, \nabla f = \int_a^b dx \cdot \nabla f = \int_a^b df = f(b) -f(a) }[/math]

or the fundamental theorem of integral calculus.
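
A minimal numerical sanity check of this one-dimensional case (the particular function is arbitrary; NumPy assumed):

import numpy as np

f = lambda x: x**3 - 2.0 * x          # arbitrary smooth scalar field on [a, b]
a, b = -1.0, 2.0

x = np.linspace(a, b, 20001)
grad_f = np.gradient(f(x), x)         # numerical derivative (the 1D vector derivative)

# Trapezoidal approximation of the integral of dx . grad(f) along the curve.
lhs = np.sum(0.5 * (grad_f[1:] + grad_f[:-1]) * np.diff(x))
rhs = f(b) - f(a)                     # the boundary term

assert np.isclose(lhs, rhs, atol=1e-5)
print(lhs, rhs)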

Also developed are the concepts of vector manifold and geometric integration theory (which generalizes differential forms).

History

Before the 20th century

Although the connection of geometry with algebra dates back at least to Euclid's Elements in the third century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space.[47] Grassmann's algebraic system could be applied to a number of different kinds of spaces, chief among them Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view, the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described certain properties (or Strecken, such as length, area, and volume). His contribution was to define a new product, the geometric product, on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in [math]\displaystyle{ n }[/math] dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.

Nevertheless, another revolutionary development of the 19th century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly after the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, based on the lectures of Gibbs.

In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math], [math]\displaystyle{ k }[/math] to indicate the basis vectors of [math]\displaystyle{ \mathbf{R}^3 }[/math]: they are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the even subalgebra of the spacetime algebra is isomorphic to the GA of 3D Euclidean space, and the quaternions are isomorphic to the even subalgebra of the GA of 3D Euclidean space, which unifies the three approaches.

20th century and present

Progress on the study of Clifford algebras quietly advanced through the twentieth century, largely through the work of abstract algebraists such as Élie Cartan, Hermann Weyl and Claude Chevalley. The geometrical approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's Geometric Algebra[48] discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory.[5] David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.

In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010).

See also

Notes

  1. The term outer product used in geometric algebra conflicts with the meaning of outer product elsewhere in mathematics.
  2. Given [math]\displaystyle{ u^2 = 1 }[/math], we have that [math]\displaystyle{ (\tfrac{1}{2}(1 + u))^2 }[/math] [math]\displaystyle{ = \tfrac{1}{4}(1 + 2u + uu) }[/math] [math]\displaystyle{ = \tfrac{1}{4}(1 + 2u + 1) }[/math] [math]\displaystyle{ = \tfrac{1}{2}(1 + u) }[/math], showing that [math]\displaystyle{ \tfrac{1}{2}(1 + u) }[/math] is idempotent, and that [math]\displaystyle{ \tfrac{1}{2}(1 + u)(1 - u) }[/math] [math]\displaystyle{ = \tfrac{1}{2}(1 - uu) }[/math] [math]\displaystyle{ = \tfrac{1}{2}(1 - 1) = 0 }[/math], showing that it is a nonzero zero divisor.
  3. This is a synonym for the scalar product of a pseudo-Euclidean vector space, and refers to the symmetric bilinear form on the [math]\displaystyle{ 1 }[/math]-vector subspace, not the inner product on a normed vector space. Some authors may extend the meaning of inner product to the entire algebra, but there is little consensus on this. Even in texts on geometric algebras, the term is not universally used.
  4. When referring to grading under the geometric product, the literature generally only focuses on a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-grading, meaning the split into even and odd [math]\displaystyle{ \mathrm{Z} }[/math]-grades. [math]\displaystyle{ \mathrm{Z}_2 }[/math] is a subgroup of the full [math]\displaystyle{ \mathrm{Z}_2{}^n }[/math]-grading of the geometric product.
  5. Grade is a synonym for degree of a homogeneous element under the grading as an algebra with the exterior product (a [math]\displaystyle{ \mathrm{Z} }[/math]-grading), and not under the geometric product.[lower-alpha 4]
  6. "reviving and generalizing somewhat a term from hamilton's quaternion calculus which has fallen into disuse" Hestenes defined a [math]\displaystyle{ k }[/math]-versor as a multivector which can be factored into a product of [math]\displaystyle{ k }[/math] vectors.[17]
  7. Only the outermorphisms of linear transformations that respect the quadratic form fit this description; outermorphisms are not in general expressible in terms of the algebraic operations.
  8. [...] the outer product operation and the join relation have essentially the same meaning. The Grassmann–Cayley algebra regards the meet relation as its counterpart and gives a unifying framework in which these two operations have equal footing [...] Grassmann himself defined the meet operation as the dual of the outer product operation, but later mathematicians defined the meet operator independently of the outer product through a process called shuffle, and the meet operation is termed the shuffle product. It is shown that this is an antisymmetric operation that satisfies associativity, defining an algebra in its own right. Thus, the Grassmann–Cayley algebra has two algebraic structures simultaneously: one based on the outer product (or join), the other based on the shuffle product (or meet). Hence, the name "double algebra", and the two are shown to be dual to each other.[23]
  9. This should not be confused with Hestenes's irregular generalization [math]\displaystyle{ \textstyle C \bullet_\text{H} D := \sum_{r\ne0,s\ne0}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{ |s-r| } }[/math], where the distinguishing notation is from Dorst, Fontijne & Mann (2007), p. 590, §B.1, which makes the point that scalar components must be handled separately with this product.
  10. The condition that [math]\displaystyle{ \underline{\mathsf{f}}(1) = 1 }[/math] is usually added to ensure that the zero map is unique.
  11. This definition follows Dorst, Fontijne & Mann (2007) and Perwass (2009) – the left contraction used by Dorst replaces the ("fat dot") inner product that Perwass uses, consistent with Perwass's constraint that grade of [math]\displaystyle{ A }[/math] may not exceed that of [math]\displaystyle{ B }[/math].
  12. Dorst appears to merely assume [math]\displaystyle{ B^{+} }[/math] such that [math]\displaystyle{ B \;\rfloor\; B^{+} = 1 }[/math], whereas Perwass (2009) defines [math]\displaystyle{ B^{+} = B^{\dagger}/(B \;\rfloor\; B^{\dagger}) }[/math], where [math]\displaystyle{ B^{\dagger} }[/math] is the conjugate of [math]\displaystyle{ B }[/math], equivalent to the reverse of [math]\displaystyle{ B }[/math] up to a sign.
  13. That is to say, the projection must be defined as [math]\displaystyle{ \mathcal{P}_{B}(A) = (A \;\rfloor\; B^{+}) \;\rfloor\; B }[/math] and not as [math]\displaystyle{ (A \;\rfloor\; B) \;\rfloor\; B^{+} }[/math], though the two are equivalent for non-null blades [math]\displaystyle{ B }[/math].
  14. This generalization to all [math]\displaystyle{ A }[/math] is apparently not considered by Perwass or Dorst.

Citations

  1. Hestenes 1986, p. 6.
  2. Li 2008, p. 411.
  3. 3.0 3.1 Hestenes 1966.
  4. Hestenes 2003.
  5. 5.0 5.1 Doran 1994.
  6. Lasenby, Lasenby & Doran 2000.
  7. Hildenbrand et al. 2004.
  8. Hestenes & Sobczyk 1984, p. 3–5.
  9. Aragón, Aragón & Rodríguez 1997, p. 101.
  10. 10.0 10.1 Hestenes 2005.
  11. Penrose 2007.
  12. Wheeler, Misner & Thorne 1973, p. 83.
  13. Wilmot 1988a, p. 2338.
  14. Wilmot 1988b, p. 2346.
  15. Chevalley 1991.
  16. Wilmot 2023.
  17. Hestenes & Sobczyk 1984, p. 103.
  18. Dorst, Fontijne & Mann 2007, p. 204.
  19. Dorst, Fontijne & Mann 2007, pp. 177–182.
  20. Lundholm & Svensson 2009, pp. 58 et seq.
  21. Lundholm & Svensson 2009, p. 58.
  22. Francis & Kosowsky 2008.
  23. Kanatani 2015, pp. 112–113.
  24. Dorst & Lasenby 2011, p. 443.
  25. Vaz & da Rocha 2016, §2.8.
  26. Hestenes & Sobczyk 1984, p. 31.
  27. Doran & Lasenby 2003, p. 102.
  28. Dorst & Lasenby 2011, p. vi.
  29. "Electromagnetism using Geometric Algebra versus Components". http://www.av8n.com/physics/maxwell-ga.htm. 
  30. Selig 2005.
  31. Hadfield & Lasenby 2020.
  32. "Projective Geometric Algebra". https://projectivegeometricalgebra.org/. 
  33. Selig 2000.
  34. Lengyel 2016.
  35. Hrdina, Návrat & Vašík 2018.
  36. https://www.nature.com/articles/s41598-022-06895-0
  37. Sokolov, A. (2013-07-16). "Clifford algebra and the projective model of Minkowski (pseudo-Euclidean) spaces". arXiv: Metric Geometry. 
  38. Lasenby, Anthony (2004). "Conformal Models of de Sitter Space, Initial Conditions for Inflation and the CMB". AIP Conference Proceedings (AIP) 736: 53–70. doi:10.1063/1.1835174. Bibcode2004AIPC..736...53L. http://dx.doi.org/10.1063/1.1835174. 
  39. Dorst 2016.
  40. (in en) Geometric Algebra with Applications in Engineering. Geometry and Computing. 4. Berlin, Heidelberg: Springer Berlin Heidelberg. 2009. doi:10.1007/978-3-540-89068-3. ISBN 978-3-540-89067-6. Bibcode2009gaae.book.....P. http://link.springer.com/10.1007/978-3-540-89068-3. 
  41. Hrdina, Jaroslav; Návrat, Aleš; Vašík, Petr (July 2018). "Geometric Algebra for Conics" (in en). Advances in Applied Clifford Algebras 28 (3). doi:10.1007/s00006-018-0879-2. ISSN 0188-7009. http://link.springer.com/10.1007/s00006-018-0879-2. 
  42. Breuils, Stéphane; Fuchs, Laurent; Hitzer, Eckhard; Nozick, Vincent; Sugimoto, Akihiro (July 2019). "Three-dimensional quadrics in extended conformal geometric algebras of higher dimensions from control points, implicit equations and axis alignment" (in en). Advances in Applied Clifford Algebras 29 (3). doi:10.1007/s00006-019-0974-z. ISSN 0188-7009. http://link.springer.com/10.1007/s00006-019-0974-z. 
  43. Easter, Robert Benjamin; Hitzer, Eckhard (September 2017). "Double Conformal Geometric Algebra" (in en). Advances in Applied Clifford Algebras 27 (3): 2175–2199. doi:10.1007/s00006-017-0784-0. ISSN 0188-7009. http://link.springer.com/10.1007/s00006-017-0784-0. 
  44. 44.0 44.1 Dorst, Fontijne & Mann 2007, §3.6 p. 85.
  45. Perwass 2009, §3.2.10.2 p. 83.
  46. Hestenes & Sobczyk 1984.
  47. Grassmann 1844.
  48. Artin 1988.

References and further reading

Arranged chronologically

External links

English translations of early books and papers

Research groups




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Geometric_algebra