In mathematics, a geometric algebra (also known as a real Clifford algebra) is an extension of elementary algebra to work with geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division and addition of objects of different dimensions.
The geometric product was first briefly mentioned by Hermann Grassmann,[1] who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). Clifford defined the Clifford algebra and its product as a unification of the Grassmann algebra and Hamilton's quaternion algebra. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries.[2] In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized in the 1960s by Hestenes, who advocated its importance to relativistic physics.[3]
The scalars and vectors have their usual interpretation, and make up distinct subspaces of a geometric algebra. Bivectors provide a more natural representation of the pseudovector quantities of vector calculus, such as oriented area, oriented angle of rotation, torque, angular momentum and the electromagnetic field. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of [math]\displaystyle{ V }[/math] and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike a vector algebra, a geometric algebra naturally accommodates any number of dimensions and any quadratic form such as in relativity.
Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space) and the conformal geometric algebra. Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes[4] and Chris Doran,[5] as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity.[6] GA has also found use as a computational tool in computer graphics[7] and robotics.
There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic,[8] "full of geometric significance" and equivalent to the universal Clifford algebra.[9] Given a finite-dimensional vector space [math]\displaystyle{ V }[/math] over a field [math]\displaystyle{ F }[/math] with a symmetric bilinear form (the inner product, e.g. the Euclidean or Lorentzian metric) [math]\displaystyle{ g : V \times V \to F }[/math], the geometric algebra of the quadratic space [math]\displaystyle{ (V, g) }[/math] is the Clifford algebra [math]\displaystyle{ \operatorname{Cl}(V, g) }[/math], whose elements are called multors or, as here, multivectors. (The term multivector is often used more specifically for elements of the exterior algebra.) As usual in this domain, for the remainder of this article, only the real case, [math]\displaystyle{ F = \R }[/math], will be considered. The notation [math]\displaystyle{ \mathcal G(p,q) }[/math] (respectively [math]\displaystyle{ \mathcal G(p,q,r) }[/math]) will be used to denote a geometric algebra for which the bilinear form [math]\displaystyle{ g }[/math] has the signature [math]\displaystyle{ (p,q) }[/math] (respectively [math]\displaystyle{ (p,q,r) }[/math]).
The essential product in the algebra is called the geometric product, and the product in the contained exterior algebra is called the exterior product (frequently called the wedge product and less often the outer product[lower-alpha 1]). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol [math]\displaystyle{ \wedge }[/math]. The above definition of the geometric algebra is abstract, so we summarize the properties of the geometric product by the following set of axioms. The geometric product has the following properties, for multors [math]\displaystyle{ A, B, C\in \mathcal{G}(p,q) }[/math]:
The exterior product has the same properties, except that the last property above is replaced by [math]\displaystyle{ a \wedge a = 0 }[/math] for [math]\displaystyle{ a \in V }[/math].
Note that in the last property above, the real number [math]\displaystyle{ g(a,a) }[/math] need not be nonnegative if [math]\displaystyle{ g }[/math] is not positive-definite. An important property of the geometric product is the existence of elements having a multiplicative inverse. For a vector [math]\displaystyle{ a }[/math], if [math]\displaystyle{ a^2 \ne 0 }[/math] then [math]\displaystyle{ a^{-1} }[/math] exists and is equal to [math]\displaystyle{ g(a,a)^{-1}a }[/math]. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if [math]\displaystyle{ u }[/math] is a vector in [math]\displaystyle{ V }[/math] such that [math]\displaystyle{ u^2 = 1 }[/math], the element [math]\displaystyle{ \textstyle\frac{1}{2}(1 + u) }[/math] is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse.[lower-alpha 2]
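The rules above are concrete enough to compute with directly. The following self-contained Python sketch (an illustration, not a library; the helper names blade_product and gp are invented here) encodes each basis blade of [math]\displaystyle{ \mathcal G(p,q) }[/math] as a bitmask over orthonormal basis vectors, multiplies blades by tracking the reordering sign and the metric, and then checks the inverse of a vector and the zero divisor [math]\displaystyle{ \textstyle\frac{1}{2}(1 + u) }[/math] discussed above.

```python
def blade_product(a, b, sig):
    """Geometric product of two basis blades given as bitmasks.
    sig[i] is the square (+1 or -1) of the i-th orthonormal basis vector."""
    # Sign from reordering the factors into canonical (increasing-index) order
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    # Contract repeated basis vectors using the metric signature
    common, i = a & b, 0
    while common:
        if common & 1:
            sign *= sig[i]
        common >>= 1
        i += 1
    return a ^ b, sign

def gp(A, B, sig):
    """Geometric product of multivectors stored as {bitmask: coefficient} dicts."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            blade, s = blade_product(ba, bb, sig)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

G3 = [1, 1, 1]                       # signature of G(3,0)

# a = 2 e1 + e2 ; bitmask 0b001 is e1, 0b010 is e2, 0b100 is e3
a = {0b001: 2.0, 0b010: 1.0}
a2 = gp(a, a, G3)                    # a^2 = g(a,a) = 5, a scalar
a_inv = {k: v / a2[0] for k, v in a.items()}
print(gp(a, a_inv, G3))              # {0: 1.0}: a a^{-1} = 1

# E = (1 + u)/2 with u = e1 (so u^2 = 1) is idempotent and a zero divisor
E = {0: 0.5, 0b001: 0.5}
print(gp(E, E, G3))                  # equals E again: E^2 = E
print(gp(E, {0: 0.5, 0b001: -0.5}, G3))  # {}: E (1 - u)/2 = 0, so E has no inverse
```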
It is usual to identify [math]\displaystyle{ \R }[/math] and [math]\displaystyle{ V }[/math] with their images under the natural embeddings [math]\displaystyle{ \R \to \mathcal{G}(p,q) }[/math] and [math]\displaystyle{ V \to \mathcal{G}(p,q) }[/math]. In this article, this identification is assumed. Throughout, the terms scalar and vector refer to elements of [math]\displaystyle{ \R }[/math] and [math]\displaystyle{ V }[/math] respectively (and of their images under this embedding).
The geometric product of any two vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] may be written as the sum of a symmetric product and an antisymmetric product:
Thus we can define the inner product[lower-alpha 3] of vectors as
so that the symmetric product can be written as
Conversely, [math]\displaystyle{ g }[/math] is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained exterior algebra:
Then by simple addition:
The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are parallel if their geometric product is equal to their inner product, whereas [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The exterior product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in [math]\displaystyle{ 3 }[/math] dimensions with positive-definite quadratic form is closely related to their exterior product.
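As a numerical illustration of the decomposition [math]\displaystyle{ ab = a \cdot b + a \wedge b }[/math], the following sketch assumes the third-party Python package clifford (an assumption; any geometric algebra implementation, including the bitmask sketch above, would serve): the grade-0 part of the geometric product of two vectors reproduces their dot product, and the grade-2 part reproduces their exterior product.

```python
import clifford

# Build G(3,0), a Euclidean 3D geometric algebra (assumes the 'clifford' package)
layout, blades = clifford.Cl(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

a = 2*e1 + 1*e2          # a = 2 e1 + e2
b = 1*e1 - 3*e3          # b = e1 - 3 e3

ab = a * b               # geometric product
print(ab)                # scalar part plus bivector part
print(ab(0))             # grade-0 part: the inner product a . b (here 2)
print(ab(2))             # grade-2 part: the exterior product a ^ b
print(a ^ b)             # the same bivector, computed directly
print(0.5*(a*b + b*a))   # symmetric part reproduces the scalar a . b
print(0.5*(a*b - b*a))   # antisymmetric part reproduces a ^ b
```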
Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras.
The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities
where the sum is over all permutations of the indices, with [math]\displaystyle{ \operatorname{sgn}(\sigma) }[/math] the sign of the permutation, and [math]\displaystyle{ a_i }[/math] are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an alternating algebra.
The equivalent structure equation for Clifford algebra is [13][14]
where [math]\displaystyle{ Pf(A) }[/math] is the Pfaffian of [math]\displaystyle{ A }[/math], [math]\displaystyle{ \mathcal{C} = \binom{n}{2i} }[/math] provides the combinations, [math]\displaystyle{ \mu }[/math], of [math]\displaystyle{ n }[/math] indices divided into parts of [math]\displaystyle{ 2i }[/math] and [math]\displaystyle{ n-2i }[/math], and [math]\displaystyle{ k }[/math] is the parity of the combination.
The Pfaffian provides a metric for the exterior algebra and, as pointed out by Claude Chevalley, Clifford algebra reduces to the exterior algebra when the quadratic form is zero.[15] The role the Pfaffian plays can be understood from a geometric viewpoint by developing Clifford algebra from simplices.[16] This derivation gives a better connection between Pascal's triangle and simplices because it supplies an interpretation of the first column of ones.
A multivector that is the exterior product of [math]\displaystyle{ r }[/math] linearly independent vectors is called a blade, and is said to be of grade [math]\displaystyle{ r }[/math].[lower-alpha 5] A multivector that is the sum of blades of grade [math]\displaystyle{ r }[/math] is called a (homogeneous) multivector of grade [math]\displaystyle{ r }[/math]. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.
Consider a set of [math]\displaystyle{ r }[/math] linearly independent vectors [math]\displaystyle{ \{a_1,\ldots,a_r\} }[/math] spanning an [math]\displaystyle{ r }[/math]-dimensional subspace of the vector space. With these, we can define a real symmetric matrix (in the same way as a Gramian matrix)
By the spectral theorem, [math]\displaystyle{ \mathbf{A} }[/math] can be diagonalized to diagonal matrix [math]\displaystyle{ \mathbf{D} }[/math] by an orthogonal matrix [math]\displaystyle{ \mathbf{O} }[/math] via
Define a new set of vectors [math]\displaystyle{ \{e_1, \ldots,e_r\} }[/math], known as orthogonal basis vectors, to be those transformed by the orthogonal matrix:
Since orthogonal transformations preserve inner products, it follows that [math]\displaystyle{ e_i\cdot e_j=[\mathbf{D}]_{ij} }[/math] and thus the [math]\displaystyle{ \{e_1, \ldots, e_r\} }[/math] are perpendicular. In other words, the geometric product of two distinct vectors [math]\displaystyle{ e_i \ne e_j }[/math] is completely specified by their exterior product, or more generally
Therefore, every blade of grade [math]\displaystyle{ r }[/math] can be written as the exterior product of [math]\displaystyle{ r }[/math] vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to
then these normalized vectors must square to [math]\displaystyle{ +1 }[/math] or [math]\displaystyle{ -1 }[/math]. By Sylvester's law of inertia, the total number of [math]\displaystyle{ +1 }[/math]s and the total number of [math]\displaystyle{ -1 }[/math]s along the diagonal matrix is invariant. By extension, the total number [math]\displaystyle{ p }[/math] of these vectors that square to [math]\displaystyle{ +1 }[/math] and the total number [math]\displaystyle{ q }[/math] that square to [math]\displaystyle{ -1 }[/math] is invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra [math]\displaystyle{ \mathcal{G}(p,q) }[/math]. For example, [math]\displaystyle{ \mathcal G(3,0) }[/math] models three-dimensional Euclidean space, [math]\displaystyle{ \mathcal G(1,3) }[/math] relativistic spacetime and [math]\displaystyle{ \mathcal G(4,1) }[/math] a conformal geometric algebra of a three-dimensional space.
The set of all possible products of [math]\displaystyle{ n }[/math] orthogonal basis vectors with indices in increasing order, including [math]\displaystyle{ 1 }[/math] as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra [math]\displaystyle{ \mathcal{G}(3,0) }[/math]:
A basis formed this way is called a canonical basis for the geometric algebra, and any other orthogonal basis for [math]\displaystyle{ V }[/math] will produce another canonical basis. Each canonical basis consists of [math]\displaystyle{ 2^n }[/math] elements. Every multivector of the geometric algebra can be expressed as a linear combination of the canonical basis elements. If the canonical basis elements are [math]\displaystyle{ \{ B_i \mid i \in S \} }[/math] with [math]\displaystyle{ S }[/math] being an index set, then the geometric product of any two multivectors is
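For example, the canonical basis of [math]\displaystyle{ \mathcal{G}(3,0) }[/math] can be enumerated mechanically as the increasing-index products of basis vectors, one for each subset of [math]\displaystyle{ \{e_1, e_2, e_3\} }[/math]; a minimal Python sketch:

```python
from itertools import combinations

dims = [1, 2, 3]
basis = ["1"]  # grade 0: the empty product
for k in range(1, len(dims) + 1):
    for combo in combinations(dims, k):          # indices in increasing order
        basis.append("e" + "".join(str(i) for i in combo))

print(basis)        # ['1', 'e1', 'e2', 'e3', 'e12', 'e13', 'e23', 'e123']
print(len(basis))   # 8 == 2**3 canonical basis elements
```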
The terminology "[math]\displaystyle{ k }[/math]-vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of [math]\displaystyle{ k }[/math] vectors). By way of example, [math]\displaystyle{ e_1 \wedge e_2 + e_3 \wedge e_4 }[/math] in [math]\displaystyle{ \mathcal{G}(4,0) }[/math] cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only [math]\displaystyle{ 0 }[/math]-, [math]\displaystyle{ 1 }[/math]-, [math]\displaystyle{ (n-1) }[/math]- and [math]\displaystyle{ n }[/math]-vectors are always blades in [math]\displaystyle{ n }[/math]-space.
A [math]\displaystyle{ k }[/math]-versor is a multivector that can be expressed as the geometric product of [math]\displaystyle{ k }[/math] invertible vectors.[lower-alpha 6][18] Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst.[19]
Some authors use the term "versor product" to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form.[lower-alpha 7] Specifically, a mapping of vectors of the form
Since both the operators and the operand are versors, there is potential for alternative examples, such as rotating a rotor or reflecting a spinor, provided that some geometrical or physical significance can be attached to such operations.
By the Cartan–Dieudonné theorem, every isometry can be expressed as a composition of reflections in hyperplanes; since composed reflections provide rotations, it follows that every orthogonal transformation is implemented by a versor.
In group terms, for a real, non-degenerate [math]\displaystyle{ \mathcal G(p,q) }[/math], having identified the group [math]\displaystyle{ \mathcal G^\times }[/math] as the group of all invertible elements of [math]\displaystyle{ \mathcal G }[/math], Lundholm gives a proof that the "versor group" [math]\displaystyle{ \{ v_1 v_2 \cdots v_k \in G : v_i \in V^\times\} }[/math] (the set of invertible versors) is equal to the Lipschitz group [math]\displaystyle{ \Gamma }[/math] (a.k.a. Clifford group, although Lundholm deprecates this usage).[20]
Lundholm defines the [math]\displaystyle{ \operatorname{Pin} }[/math], [math]\displaystyle{ \operatorname{Spin} }[/math], and [math]\displaystyle{ \operatorname{Spin}^+ }[/math] subgroups, generated by unit vectors, and in the case of [math]\displaystyle{ \operatorname{Spin} }[/math] and [math]\displaystyle{ \operatorname{Spin}^+ }[/math], only an even number of such vector factors can be present.[21]
Subgroup | Definition | GA term |
---|---|---|
[math]\displaystyle{ \Gamma }[/math] | [math]\displaystyle{ \{ v_1 v_2 \cdots v_k : v_i \in V^\times \} }[/math] | versors |
[math]\displaystyle{ \operatorname{Pin} }[/math] | [math]\displaystyle{ X \in \Gamma : X\tilde X = \pm 1 }[/math] | unit versors |
[math]\displaystyle{ \operatorname{Spin} }[/math] | [math]\displaystyle{ {\operatorname{Pin}} \cap \mathcal{G}^+ }[/math] | even unit versors |
[math]\displaystyle{ \operatorname{Spin}^{+} }[/math] | [math]\displaystyle{ X \in \operatorname{Spin} : X\tilde X = 1 }[/math] | rotors |
Spinors are defined as elements of the even subalgebra of a real GA with spinor norm [math]\displaystyle{ 1 }[/math]. Multiple analyses of spinors use GA as a representation.[22]
Using an orthogonal basis, a graded vector space structure can be established. Elements of the geometric algebra that are scalar multiples of [math]\displaystyle{ 1 }[/math] are grade-[math]\displaystyle{ 0 }[/math] blades and are called scalars. Multivectors that are in the span of [math]\displaystyle{ \{e_1,\ldots,e_n\} }[/math] are grade-[math]\displaystyle{ 1 }[/math] blades and are the ordinary vectors. Multivectors in the span of [math]\displaystyle{ \{e_ie_j\mid 1\leq i\lt j\leq n\} }[/math] are grade-[math]\displaystyle{ 2 }[/math] blades and are the bivectors. This terminology continues through to the last grade of [math]\displaystyle{ n }[/math]-vectors. Alternatively, grade-[math]\displaystyle{ n }[/math] blades are called pseudoscalars, grade-[math]\displaystyle{ (n-1) }[/math] blades pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of mixed grade. The grading of multivectors is independent of the basis chosen originally.
This is a grading as a vector space, but not as an algebra. Because the product of an [math]\displaystyle{ r }[/math]-blade and an [math]\displaystyle{ s }[/math]-blade is contained in the span of [math]\displaystyle{ 0 }[/math] through [math]\displaystyle{ r+s }[/math]-blades, the geometric algebra is a filtered algebra.
A multivector [math]\displaystyle{ A }[/math] may be decomposed with the grade-projection operator [math]\displaystyle{ \langle A \rangle _r }[/math], which outputs the grade-[math]\displaystyle{ r }[/math] portion of [math]\displaystyle{ A }[/math]. As a result:
As an example, the geometric product of two vectors [math]\displaystyle{ a b = a \cdot b + a \wedge b = \langle a b \rangle_0 + \langle a b \rangle_2 }[/math] since [math]\displaystyle{ \langle a b \rangle_0=a\cdot b }[/math] and [math]\displaystyle{ \langle a b \rangle_2 = a\wedge b }[/math] and [math]\displaystyle{ \langle a b \rangle_i=0 }[/math], for [math]\displaystyle{ i }[/math] other than [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ 2 }[/math].
The decomposition of a multivector [math]\displaystyle{ A }[/math] may also be split into those components that are even and those that are odd:
This is the result of coarsening the [math]\displaystyle{ \mathrm{Z} }[/math]-grading of the vector space to a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-grading. The geometric product respects this coarser grading. Thus, in addition to being a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-graded vector space, the geometric algebra is a [math]\displaystyle{ \mathrm{Z}_2 }[/math]-graded algebra, also known as a superalgebra.
Restricting to the even part, the product of two even elements is also even. This means that the even multivectors define an even subalgebra. The even subalgebra of an [math]\displaystyle{ n }[/math]-dimensional geometric algebra is isomorphic (without preserving either filtration or grading) to a full geometric algebra of [math]\displaystyle{ (n-1) }[/math] dimensions. Examples include [math]\displaystyle{ \mathcal G^{+}(2,0) \cong \mathcal{G}(0,1) }[/math] and [math]\displaystyle{ \mathcal{G}^{+}(1,3) \cong \mathcal G(3,0) }[/math].
Geometric algebra represents subspaces of [math]\displaystyle{ V }[/math] as blades, and so they coexist in the same algebra with vectors from [math]\displaystyle{ V }[/math]. A [math]\displaystyle{ k }[/math]-dimensional subspace [math]\displaystyle{ W }[/math] of [math]\displaystyle{ V }[/math] is represented by taking an orthogonal basis [math]\displaystyle{ \{b_1,b_2,\ldots, b_k\} }[/math] and using the geometric product to form the blade [math]\displaystyle{ D = b_1b_2\cdots b_k }[/math]. There are multiple blades representing [math]\displaystyle{ W }[/math]; all those representing [math]\displaystyle{ W }[/math] are scalar multiples of [math]\displaystyle{ D }[/math]. These blades can be separated into two sets: positive multiples of [math]\displaystyle{ D }[/math] and negative multiples of [math]\displaystyle{ D }[/math]. The positive multiples of [math]\displaystyle{ D }[/math] are said to have the same orientation as [math]\displaystyle{ D }[/math], and the negative multiples the opposite orientation.
Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) [math]\displaystyle{ n }[/math]-blades provide but that (the generalized class of) grade-[math]\displaystyle{ n }[/math] multivectors do not when [math]\displaystyle{ n \ge 4 }[/math].
Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace [math]\displaystyle{ W }[/math] of [math]\displaystyle{ V }[/math] is a blade that is the product of the members of an orthonormal basis for [math]\displaystyle{ W }[/math]. It can be shown that if [math]\displaystyle{ I }[/math] and [math]\displaystyle{ I' }[/math] are both unit pseudoscalars for [math]\displaystyle{ W }[/math], then [math]\displaystyle{ I = \pm I' }[/math] and [math]\displaystyle{ I^2 = \pm 1 }[/math]. If one does not choose an orthonormal basis for [math]\displaystyle{ W }[/math], then the Plücker embedding gives a vector in the exterior algebra but only up to scaling. Using the vector space isomorphism between the geometric algebra and exterior algebra, this gives the equivalence class of [math]\displaystyle{ \alpha I }[/math] for all [math]\displaystyle{ \alpha \neq 0 }[/math]. Orthonormality gets rid of this ambiguity except for the signs above.
Suppose the geometric algebra [math]\displaystyle{ \mathcal{G}(n,0) }[/math] with the familiar positive definite inner product on [math]\displaystyle{ \R^n }[/math] is formed. Given a plane (two-dimensional subspace) of [math]\displaystyle{ \R^n }[/math], one can find an orthonormal basis [math]\displaystyle{ \{ b_1, b_2 \} }[/math] spanning the plane, and thus find a unit pseudoscalar [math]\displaystyle{ I = b_1 b_2 }[/math] representing this plane. The geometric product of any two vectors in the span of [math]\displaystyle{ b_1 }[/math] and [math]\displaystyle{ b_2 }[/math] lies in [math]\displaystyle{ \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }[/math], that is, it is the sum of a [math]\displaystyle{ 0 }[/math]-vector and a [math]\displaystyle{ 2 }[/math]-vector.
By the properties of the geometric product, [math]\displaystyle{ I^2 = b_1 b_2 b_1 b_2 = -b_1 b_2 b_2 b_1 = -1 }[/math]. The resemblance to the imaginary unit is not incidental: the subspace [math]\displaystyle{ \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }[/math] is [math]\displaystyle{ \R }[/math]-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each two-dimensional subspace of [math]\displaystyle{ V }[/math] on which the quadratic form is definite.
It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to [math]\displaystyle{ -1 }[/math], and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.
In [math]\displaystyle{ \mathcal{G}(3,0) }[/math], a further familiar case occurs. Given a canonical basis consisting of orthonormal vectors [math]\displaystyle{ e_i }[/math] of [math]\displaystyle{ V }[/math], the set of all [math]\displaystyle{ 2 }[/math]-vectors is spanned by
Labelling these [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math] and [math]\displaystyle{ k }[/math] (momentarily deviating from our uppercase convention), the subspace generated by [math]\displaystyle{ 0 }[/math]-vectors and [math]\displaystyle{ 2 }[/math]-vectors is exactly [math]\displaystyle{ \{ \alpha_0 + i \alpha_1 + j \alpha_2 + k \alpha_3 \mid \alpha_i \in \R\} }[/math]. This set is seen to be the even subalgebra of [math]\displaystyle{ \mathcal{G}(3,0) }[/math], and furthermore is isomorphic as an [math]\displaystyle{ \R }[/math]-algebra to the quaternions, another important algebraic system.
It is common practice to extend the exterior product on vectors to the entire algebra. This may be done through the use of the above mentioned grade projection operator:
This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product:
The regressive product (usually referred to as the "meet") is the dual of the exterior product (or "join" in this context).[lower-alpha 8] The dual specification of elements permits, for blades [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], the intersection (or meet) where the duality is to be taken relative to the smallest grade blade containing both [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] (the join).[24]
with [math]\displaystyle{ I }[/math] the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative.[25]
The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper (Dorst 2002) gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.
Among these several different generalizations of the inner product on vectors are:
(Dorst 2002) makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. For example,
Benefits of using the left contraction as an extension of the inner product on vectors include that the identity [math]\displaystyle{ ab = a \cdot b + a \wedge b }[/math] is extended to [math]\displaystyle{ aB = a \;\rfloor\; B + a \wedge B }[/math] for any vector [math]\displaystyle{ a }[/math] and multivector [math]\displaystyle{ B }[/math], and that the projection operation [math]\displaystyle{ \mathcal{P}_b (a) = (a \cdot b^{-1})b }[/math] is extended to [math]\displaystyle{ \mathcal{P}_B (A) = (A \;\rfloor\; B^{-1}) \;\rfloor\; B }[/math] for any blade [math]\displaystyle{ B }[/math] and any multivector [math]\displaystyle{ A }[/math] (with a minor modification to accommodate null [math]\displaystyle{ B }[/math], given below).
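For a vector [math]\displaystyle{ a }[/math] and a [math]\displaystyle{ 2 }[/math]-blade [math]\displaystyle{ B }[/math], the left contraction is the grade-1 part of [math]\displaystyle{ aB }[/math] and the exterior product is the grade-3 part, so the extended identity can be checked numerically; a sketch assuming the third-party clifford package:

```python
import clifford

layout, blades = clifford.Cl(3)                    # G(3,0) (assumes the 'clifford' package)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

a = e1 + 2*e2                                      # a vector
B = (e1 + e3) ^ e2                                 # a 2-blade

aB = a * B
left_contraction = aB(1)                           # a ⌋ B: grade-(2-1) part of aB
wedge = aB(3)                                      # a ∧ B: grade-(2+1) part of aB
print(aB)
print(left_contraction + wedge)                    # same multivector: aB = a⌋B + a∧B
print(wedge, a ^ B)                                # the grade-3 part equals the exterior product
```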
Let [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math] be a basis of [math]\displaystyle{ V }[/math], i.e. a set of [math]\displaystyle{ n }[/math] linearly independent vectors that span the [math]\displaystyle{ n }[/math]-dimensional vector space [math]\displaystyle{ V }[/math]. The basis that is dual to [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math] is the set of elements of the dual vector space [math]\displaystyle{ V^{*} }[/math] that forms a biorthogonal system with this basis, thus being the elements denoted [math]\displaystyle{ \{ e^1 , \ldots , e^n \} }[/math] satisfying
where [math]\displaystyle{ \delta }[/math] is the Kronecker delta.
Given a nondegenerate quadratic form on [math]\displaystyle{ V }[/math], [math]\displaystyle{ V^{*} }[/math] becomes naturally identified with [math]\displaystyle{ V }[/math], and the dual basis may be regarded as elements of [math]\displaystyle{ V }[/math], but are not in general the same set as the original basis.
Given further a GA of [math]\displaystyle{ V }[/math], let
be the pseudoscalar (which does not necessarily square to [math]\displaystyle{ \pm 1 }[/math]) formed from the basis [math]\displaystyle{ \{ e_1 , \ldots , e_n \} }[/math]. The dual basis vectors may be constructed as
where the [math]\displaystyle{ \check{e}_i }[/math] denotes that the [math]\displaystyle{ i }[/math]th basis vector is omitted from the product.
A dual basis is also known as a reciprocal basis or reciprocal frame.
A major usage of a dual basis is to separate vectors into components. Given a vector [math]\displaystyle{ a }[/math], scalar components [math]\displaystyle{ a^i }[/math] can be defined as
in terms of which [math]\displaystyle{ a }[/math] can be separated into vector components as
We can also define scalar components [math]\displaystyle{ a_i }[/math] as
in terms of which [math]\displaystyle{ a }[/math] can be separated into vector components in terms of the dual basis as
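Numerically, a reciprocal frame can also be obtained by plain linear algebra: with Gram matrix [math]\displaystyle{ G_{ij} = e_i \cdot e_j }[/math], the dual vectors are [math]\displaystyle{ e^i = \sum_j (G^{-1})_{ij} e_j }[/math]. The following NumPy sketch (with arbitrary example values, using the Gram-matrix route rather than the pseudoscalar construction above) verifies biorthogonality and the component decompositions:

```python
import numpy as np

# A non-orthogonal basis of R^3, one vector per row (arbitrary example values)
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

G = E @ E.T                      # Gram matrix  G_ij = e_i . e_j
E_dual = np.linalg.inv(G) @ E    # reciprocal frame: e^i = sum_j (G^-1)_ij e_j

print(E_dual @ E.T)              # identity matrix: e^i . e_j = delta^i_j

a = np.array([0.5, -2.0, 3.0])
coeffs = E_dual @ a              # scalar components a^i = e^i . a
print(coeffs @ E)                # reconstructs a = sum_i a^i e_i
```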
A dual basis as defined above for the vector subspace of a geometric algebra can be extended to cover the entire algebra.[26] For compactness, we use a single capital letter to represent an ordered set of vector indices; that is, writing
where [math]\displaystyle{ j_1 \lt j_2 \lt \dots \lt j_n, }[/math] we can write a basis blade as
The corresponding reciprocal blade has the indices in opposite order:
Similar to the case above with vectors, it can be shown that
where [math]\displaystyle{ * }[/math] is the scalar product.
With [math]\displaystyle{ A }[/math] a multivector, we can define scalar components as[27]
in terms of which [math]\displaystyle{ A }[/math] can be separated into component blades as
We can alternatively define scalar components
in terms of which [math]\displaystyle{ A }[/math] can be separated into component blades as
Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of linear functions on multivectors, which can still be used when necessary. The geometric algebra of an [math]\displaystyle{ n }[/math]-dimensional vector space is spanned by a basis of [math]\displaystyle{ 2^n }[/math] elements. If a multivector is represented by a [math]\displaystyle{ 2^n \times 1 }[/math] real column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the matrix multiplication by a [math]\displaystyle{ 2^n \times 2^n }[/math] real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation.
A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the outermorphism of the linear transformation is the unique[lower-alpha 10] extension of the versor. If [math]\displaystyle{ f }[/math] is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule
for a blade, extended to the whole algebra through linearity.
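In three Euclidean dimensions the outermorphism scales the pseudoscalar by the determinant, [math]\displaystyle{ \mathsf{f}(a \wedge b \wedge c) = \det(f)\, a \wedge b \wedge c }[/math], which can be checked numerically by identifying the coefficient of the pseudoscalar with the scalar triple product; a NumPy sketch with random example values:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(3, 3))          # a linear map on vectors (random example)
a, b, c = rng.normal(size=(3, 3))    # three vectors in R^3

def triple(u, v, w):
    # Coefficient of e1^e2^e3 in u ^ v ^ w, i.e. the scalar triple product
    return np.dot(np.cross(u, v), w)

lhs = triple(f @ a, f @ b, f @ c)    # outermorphism applied to the 3-blade a^b^c
rhs = np.linalg.det(f) * triple(a, b, c)
print(np.isclose(lhs, rhs))          # True: f_(a^b^c) = det(f) a^b^c
```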
Although a lot of attention has been placed on CGA, it should be noted that GA is not just one algebra: it is one of a family of algebras with the same essential structure.[28]
The even subalgebra of [math]\displaystyle{ \mathcal G(2,0) }[/math] is isomorphic to the complex numbers, as may be seen by writing a vector [math]\displaystyle{ P }[/math] in terms of its components in an orthonormal basis and left multiplying by the basis vector [math]\displaystyle{ e_1 }[/math], yielding
where we identify [math]\displaystyle{ i \mapsto e_1e_2 }[/math] since
Similarly, the even subalgebra of [math]\displaystyle{ \mathcal G(3,0) }[/math] with basis [math]\displaystyle{ \{1, e_2 e_3, e_3 e_1, e_1 e_2 \} }[/math] is isomorphic to the quaternions as may be seen by identifying [math]\displaystyle{ i \mapsto -e_2 e_3 }[/math], [math]\displaystyle{ j \mapsto -e_3 e_1 }[/math] and [math]\displaystyle{ k \mapsto -e_1 e_2 }[/math].
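This identification is easy to verify computationally; a sketch assuming the third-party clifford package:

```python
import clifford

layout, blades = clifford.Cl(3)               # G(3,0) (assumes the 'clifford' package)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

# Quaternion units as negated unit bivectors of the even subalgebra
i = -e2*e3
j = -e3*e1
k = -e1*e2

print(i*i, j*j, k*k)   # each is -1
print(i*j*k)           # -1: Hamilton's defining relation i j k = -1
print(i*j, k)          # i j = k
```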
Every associative algebra has a matrix representation; replacing the three Cartesian basis vectors by the Pauli matrices gives a representation of [math]\displaystyle{ \mathcal G(3,0) }[/math]:
Dotting the "Pauli vector" (a dyad):
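A NumPy sketch verifying that the Pauli matrices obey the relations of orthonormal vectors in [math]\displaystyle{ \mathcal G(3,0) }[/math], and that the product of two Pauli vectors splits into a scalar (dot product) part and a part built from the cross product, mirroring [math]\displaystyle{ ab = a \cdot b + a \wedge b }[/math]:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

# Orthonormal-vector relations: s_i s_j + s_j s_i = 2 delta_ij I
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anti, 2 * (i == j) * I2)

def pauli_vector(v):
    # Image of the vector v = (v1, v2, v3) under the representation
    return sum(vi * si for vi, si in zip(v, sigma))

a = np.array([2.0, 1.0, 0.0])
b = np.array([1.0, 0.0, -3.0])
AB = pauli_vector(a) @ pauli_vector(b)        # geometric product a b
expected = np.dot(a, b) * I2 + 1j * pauli_vector(np.cross(a, b))
print(np.allclose(AB, expected))              # True: scalar part a.b plus bivector part i (a x b).sigma
```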
In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, [math]\displaystyle{ \mathcal{G}(1,3) }[/math], called spacetime algebra (STA),[3] or less commonly, [math]\displaystyle{ \mathcal{G}(3,0) }[/math], interpreted as the algebra of physical space (APS).
While in STA, points of spacetime are represented simply by vectors, in APS, points of [math]\displaystyle{ (3+1) }[/math]-dimensional spacetime are instead represented by paravectors, a three-dimensional vector (space) plus a one-dimensional scalar (time).
In spacetime algebra the electromagnetic field tensor has a bivector representation [math]\displaystyle{ {F} = ({E} + i c {B})\gamma_0 }[/math].[29] Here, the [math]\displaystyle{ i = \gamma_0 \gamma_1 \gamma_2 \gamma_3 }[/math] is the unit pseudoscalar (or four-dimensional volume element), [math]\displaystyle{ \gamma_0 }[/math] is the unit vector in time direction, and [math]\displaystyle{ E }[/math] and [math]\displaystyle{ B }[/math] are the classic electric and magnetic field vectors (with a zero time component). Using the four-current [math]\displaystyle{ {J} }[/math], Maxwell's equations then become
Formulation | Homogeneous equations | Non-homogeneous equations |
---|---|---|
Fields | [math]\displaystyle{ D F = \mu_0 J }[/math] | |
 | [math]\displaystyle{ D \wedge F = 0 }[/math] | [math]\displaystyle{ D ~\rfloor~ F = \mu_0 J }[/math] |
Potentials (any gauge) | [math]\displaystyle{ F = D \wedge A }[/math] | [math]\displaystyle{ D ~\rfloor~ (D \wedge A) = \mu_0 J }[/math] |
Potentials (Lorenz gauge) | [math]\displaystyle{ F = D A }[/math], [math]\displaystyle{ D ~\rfloor~ A = 0 }[/math] | [math]\displaystyle{ D^2 A = \mu_0 J }[/math] |
In geometric calculus, juxtaposition of vectors such as in [math]\displaystyle{ DF }[/math] indicates the geometric product, and it can be decomposed into parts as [math]\displaystyle{ DF = D ~\rfloor~ F + D \wedge F }[/math]. Here [math]\displaystyle{ D }[/math] is the covector derivative in any spacetime and reduces to [math]\displaystyle{ \bigtriangledown }[/math] in flat spacetime; [math]\displaystyle{ \bigtriangledown }[/math] plays the same role in Minkowski [math]\displaystyle{ 4 }[/math]-spacetime as [math]\displaystyle{ \nabla }[/math] does in Euclidean [math]\displaystyle{ 3 }[/math]-space and is related to the d'Alembertian by [math]\displaystyle{ \Box=\bigtriangledown^2 }[/math]. Indeed, given an observer represented by a future-pointing timelike vector [math]\displaystyle{ \gamma_0 }[/math] we have
Boosts in this Lorentzian metric space have the same expression [math]\displaystyle{ e^{{\beta}} }[/math] as rotations in Euclidean space, where [math]\displaystyle{ {\beta} }[/math] is the bivector generated by the time and space directions involved, whereas in the Euclidean case it is the bivector generated by the two space directions, strengthening the analogy almost to an identity.
The Dirac matrices are a representation of [math]\displaystyle{ \mathcal G(1,3) }[/math], showing the equivalence with matrix representations used by physicists.
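This equivalence can also be checked directly: in the Dirac representation the gamma matrices satisfy [math]\displaystyle{ \gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2 \eta_{\mu\nu} I }[/math] with [math]\displaystyle{ \eta = \operatorname{diag}(1,-1,-1,-1) }[/math], exactly the relations of orthonormal vectors of [math]\displaystyle{ \mathcal G(1,3) }[/math]; a NumPy sketch:

```python
import numpy as np

# Pauli matrices, used as 2x2 blocks
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac-representation gamma matrices
gamma = [np.block([[I2, Z2], [Z2, -I2]])]                 # gamma_0
gamma += [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]    # gamma_1 .. gamma_3

eta = np.diag([1.0, -1.0, -1.0, -1.0])                    # signature (1,3)
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Dirac matrices satisfy the G(1,3) relations")
```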
Homogeneous models generally refer to a projective representation in which the elements of the one-dimensional subspaces of a vector space represent points of a geometry.
In a geometric algebra of a space of [math]\displaystyle{ n }[/math] dimensions, the rotors represent a set of transformations with [math]\displaystyle{ n(n-1)/2 }[/math] degrees of freedom, corresponding to rotations – for example, [math]\displaystyle{ 3 }[/math] when [math]\displaystyle{ n=3 }[/math] and [math]\displaystyle{ 6 }[/math] when [math]\displaystyle{ n=4 }[/math]. Geometric algebra is often used to model a projective space, i.e. as a homogeneous model: a point, line, plane, etc. is represented by an equivalence class of elements of the algebra that differ by an invertible scalar factor.
The rotors in a space of dimension [math]\displaystyle{ n+1 }[/math] have [math]\displaystyle{ n(n-1)/2+n }[/math] degrees of freedom, the same as the number of degrees of freedom in the rotations and translations combined for an [math]\displaystyle{ n }[/math]-dimensional space.
This is the case in Projective Geometric Algebra (PGA), which is used[30][31][32] to represent Euclidean isometries in Euclidean geometry (thereby covering the large majority of engineering applications of geometry). In this model, a degenerate dimension is added to the three Euclidean dimensions to form the algebra [math]\displaystyle{ \mathcal G(3,0,1) }[/math]. With a suitable identification of subspaces to represent points, lines and planes, the versors of this algebra represent all proper Euclidean isometries, which are always screw motions in 3-dimensional space, along with all improper Euclidean isometries, which include reflections, rotoreflections, transflections, and point reflections.
PGA combines [math]\displaystyle{ \mathcal G(3,0,1) }[/math] with a dual operator to obtain meet, join, distance, and angle formulae. Depending on the author,[33][34] this could mean the Hodge star or the projective dual, though both result in identical equations being derived, albeit with different notation. In effect, the dual switches basis vectors that are present and absent in the expression of each term of the algebraic representation. For example, in the PGA of 3-dimensional space, the dual of the line [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] is the line [math]\displaystyle{ \boldsymbol{e}_{03} }[/math], because [math]\displaystyle{ \boldsymbol{e}_{ 0} }[/math] and [math]\displaystyle{ \boldsymbol{e}_{ 3} }[/math] are basis elements that are not contained in [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] but are contained in [math]\displaystyle{ \boldsymbol{e}_{03} }[/math]. In the PGA of 2-dimensional space, the dual of [math]\displaystyle{ \boldsymbol{e}_{12} }[/math] is [math]\displaystyle{ \boldsymbol{e}_{0} }[/math], since there is no [math]\displaystyle{ \boldsymbol{e}_{3} }[/math] element.
PGA is a widely used system that combines geometric algebra with homogeneous representations in geometry, but there exist several other such systems. The conformal model discussed below is homogeneous, as is "Conic Geometric Algebra",[35] and see Plane-based geometric algebra for discussion of homogeneous models of elliptic and hyperbolic geometry compared with the Euclidean geometry derived from PGA.
Working within GA, Euclidean space [math]\displaystyle{ \mathcal E^3 }[/math] (along with a conformal point at infinity) is embedded projectively in the CGA [math]\displaystyle{ \mathcal{G}(4,1) }[/math] via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace. This allows all conformal transformations to be performed as rotations and reflections and is covariant, extending incidence relations of projective geometry to circles and spheres.
Specifically, we add orthogonal basis vectors [math]\displaystyle{ e_+ }[/math] and [math]\displaystyle{ e_- }[/math] such that [math]\displaystyle{ e_+^2 = +1 }[/math] and [math]\displaystyle{ e_-^2 = -1 }[/math] to the basis of the vector space that generates [math]\displaystyle{ \mathcal{G}(3,0) }[/math] and identify null vectors
This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry and in this case allows the modeling of Euclidean transformations of [math]\displaystyle{ \mathbf{R}^3 }[/math] as orthogonal transformations of a subset of [math]\displaystyle{ \mathbf{R}^{4,1} }[/math].
A fast-changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.
Note in this list that [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] can be swapped and the same name applies, with relatively little change occurring; see sign convention. For example, [math]\displaystyle{ \mathcal{G}(3, 1, 0) }[/math] and [math]\displaystyle{ \mathcal{G}(1, 3, 0) }[/math] are both referred to as Spacetime Algebra.[36]
Signature | Names and acronyms | Blades, i.e. oriented geometric objects that the algebra can represent | Rotors, i.e. orientation-preserving transformations that the algebra can represent | Notes |
---|---|---|---|---|
[math]\displaystyle{ \mathcal{G}(3,0,0) }[/math] | Vectorspace GA, VGA | Planes and lines through the origin | Rotations, e.g. [math]\displaystyle{ \mathrm{SO} (3) }[/math] | First GA to be discovered |
[math]\displaystyle{ \mathcal{G}(3,0,1) }[/math] | Plane-based GA, Projective GA, PGA | Planes, lines, and points anywhere in space | Rotations and translations, e.g. rigid motions, [math]\displaystyle{ \mathrm{SE}(3) }[/math], a.k.a. [math]\displaystyle{ \mathrm{SO}(3,0,1) }[/math] | Slight modifications to the signature allow for the modelling of hyperbolic and elliptic space; see the main article. Cannot model the entire "projective" group. |
[math]\displaystyle{ \mathcal{G}(3,1,0) }[/math] | Spacetime Algebra, STA | Volumes, planes and lines through the origin in spacetime | Rotations and spacetime boosts, e.g. [math]\displaystyle{ \mathrm{SO}(3,1) }[/math], the Lorentz group | Basis for Gauge Theory Gravity. |
[math]\displaystyle{ \mathcal{G}(3,1,1) }[/math] | Spacetime Algebra Projectivized,[37] STAP | Volumes, planes, lines, and points (events) in spacetime | Rotations, translations, and spacetime boosts (Poincaré group) |
[math]\displaystyle{ \mathcal{G}(4,1,0) }[/math] | Conformal GA, CGA | Spheres, circles, point pairs, lines, and planes anywhere in space | Transformations of space that preserve angles (Conformal group [math]\displaystyle{ \mathrm{SO}(4,1) }[/math]) | |
[math]\displaystyle{ \mathcal{G}(4,2,0) }[/math] | Conformal Spacetime Algebra,[38] CSTA | Spheres, circles, planes, lines, light-cones, trajectories of objects with constant acceleration, all in spacetime | Conformal transformations of spacetime, e.g. transformations that preserve rapidity along arclengths through spacetime | Related to Twistor theory. |
[math]\displaystyle{ \mathcal{G}(3,3,0) }[/math] | Mother Algebra[39] | Unknown | Projective group | |
[math]\displaystyle{ \mathcal{G}(5,3,0) }[/math] | GA for Conics, GAC | Points, point pairs/triples/quadruples, conics, pencils of up to six independent conics. | Reflections, translations, rotations, dilations, others | Conics can be created from control points and pencils of conics.
[math]\displaystyle{ \mathcal{G}(9,6,0) }[/math] | Quadric Conformal GA, QCGA[42] | Points, tuples of up to 8 points, quadric surfaces, conics, conics on quadric surfaces (such as spherical conics), pencils of up to 9 quadric surfaces. | Reflections, translations, rotations, dilations, others | Quadric surfaces can be created from control points and their surface normals can be determined.
[math]\displaystyle{ \mathcal{G}(8,2,0) }[/math] | Double Conformal Geometric Algebra (DCGA)[43] | Points, Darboux cyclides, quadric surfaces | Reflections, translations, rotations, dilations, others | Uses bivectors of two independent CGA bases to represent 5×5 symmetric "matrices" of 15 unique coefficients. This comes at the cost of the ability to perform intersections and construction by points.
For any vector [math]\displaystyle{ a }[/math] and any invertible vector [math]\displaystyle{ m }[/math],
where the projection of [math]\displaystyle{ a }[/math] onto [math]\displaystyle{ m }[/math] (or the parallel part) is
and the rejection of [math]\displaystyle{ a }[/math] from [math]\displaystyle{ m }[/math] (or the orthogonal part) is
Using the concept of a [math]\displaystyle{ k }[/math]-blade [math]\displaystyle{ B }[/math] as representing a subspace of [math]\displaystyle{ V }[/math] and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible [math]\displaystyle{ k }[/math]-blade [math]\displaystyle{ B }[/math] as[lower-alpha 11]
with the rejection being defined as
The projection and rejection generalize to null blades [math]\displaystyle{ B }[/math] by replacing the inverse [math]\displaystyle{ B^{-1} }[/math] with the pseudoinverse [math]\displaystyle{ B^{+} }[/math] with respect to the contractive product.[lower-alpha 12] The outcome of the projection coincides in both cases for non-null blades.[44][45] For null blades [math]\displaystyle{ B }[/math], the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used,[lower-alpha 13] as only then is the result necessarily in the subspace represented by [math]\displaystyle{ B }[/math].[44] The projection generalizes through linearity to general multivectors [math]\displaystyle{ A }[/math].[lower-alpha 14] The projection is not linear in [math]\displaystyle{ B }[/math] and does not generalize to objects [math]\displaystyle{ B }[/math] that are not blades.
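For a single invertible vector [math]\displaystyle{ m }[/math] in Euclidean space, [math]\displaystyle{ (a \cdot m)m^{-1} }[/math] reduces to the familiar formula [math]\displaystyle{ (a \cdot m / m \cdot m)\, m }[/math]; a short NumPy sketch with example values:

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0])
m = np.array([1.0, 2.0, 2.0])              # an invertible (non-null) vector

proj = (np.dot(a, m) / np.dot(m, m)) * m   # (a . m) m^{-1}: component of a along m
rej = a - proj                             # (a ^ m) m^{-1}: component orthogonal to m

print(np.allclose(proj + rej, a))          # True: a = proj + rej
print(np.isclose(np.dot(rej, m), 0.0))     # True: the rejection is orthogonal to m
```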
Simple reflections in a hyperplane are readily expressed in the algebra through conjugation with a single vector. These serve to generate the group of general rotoreflections and rotations.
The reflection [math]\displaystyle{ c' }[/math] of a vector [math]\displaystyle{ c }[/math] along a vector [math]\displaystyle{ m }[/math], or equivalently in the hyperplane orthogonal to [math]\displaystyle{ m }[/math], is the same as negating the component of a vector parallel to [math]\displaystyle{ m }[/math]. The result of the reflection will be
This is not the most general operation that may be regarded as a reflection when the dimension [math]\displaystyle{ n \ge 4 }[/math]. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection [math]\displaystyle{ a' }[/math] of a vector [math]\displaystyle{ a }[/math] may be written
where
If we define the reflection along a non-null vector [math]\displaystyle{ m }[/math] of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,
and for the product of an even number of vectors that
Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector [math]\displaystyle{ A }[/math] using any reflection versor [math]\displaystyle{ M }[/math] may be written
where [math]\displaystyle{ \alpha }[/math] is the automorphism of reflection through the origin of the vector space ([math]\displaystyle{ v \mapsto -v }[/math]) extended through linearity to the whole algebra.
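The sandwich form of a reflection can be checked in the Pauli-matrix representation of [math]\displaystyle{ \mathcal G(3,0) }[/math] described earlier: mapping vectors to matrices, [math]\displaystyle{ -mcm^{-1} }[/math] agrees with the componentwise formula [math]\displaystyle{ c - 2(c \cdot m / m \cdot m)\, m }[/math]. A NumPy sketch with example values:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rep(v):
    # Pauli-matrix representative of the vector v in G(3,0)
    return sum(vi * si for vi, si in zip(v, sigma))

c = np.array([1.0, 2.0, 3.0])
m = np.array([0.0, 1.0, 1.0])                      # reflect along m (non-null)

M, C = rep(m), rep(c)
M_inv = M / np.dot(m, m)                           # m^{-1} = m / m^2
reflected = -M @ C @ M_inv                         # the versor sandwich -m c m^{-1}

expected = c - 2 * (np.dot(c, m) / np.dot(m, m)) * m
print(np.allclose(reflected, rep(expected)))       # True
```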
If we have a product of vectors [math]\displaystyle{ R = a_1a_2 \cdots a_r }[/math] then we denote the reverse as
As an example, assume that [math]\displaystyle{ R = ab }[/math] we get
Scaling [math]\displaystyle{ R }[/math] so that [math]\displaystyle{ R\tilde R = 1 }[/math] then
so [math]\displaystyle{ Rv\tilde R }[/math] leaves the length of [math]\displaystyle{ v }[/math] unchanged. We can also show that
so the transformation [math]\displaystyle{ Rv\tilde R }[/math] preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; [math]\displaystyle{ R }[/math] is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor.
There is a general method for rotating a vector involving the formation of a multivector of the form [math]\displaystyle{ R = e^{-B \theta / 2} }[/math] that produces a rotation through the angle [math]\displaystyle{ \theta }[/math] in the plane with the orientation defined by a [math]\displaystyle{ 2 }[/math]-blade [math]\displaystyle{ B }[/math].
Rotors are a generalization of quaternions to [math]\displaystyle{ n }[/math]-dimensional spaces.
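A NumPy sketch, again using the Pauli-matrix representation introduced earlier: with [math]\displaystyle{ B = e_1 e_2 }[/math] (so [math]\displaystyle{ B^2 = -1 }[/math] and [math]\displaystyle{ R = e^{-B\theta/2} = \cos(\theta/2) - B\sin(\theta/2) }[/math]), the sandwich [math]\displaystyle{ v \mapsto R v \tilde R }[/math] rotates a vector by [math]\displaystyle{ \theta }[/math] in the [math]\displaystyle{ e_1 e_2 }[/math] plane; reversion corresponds to the Hermitian conjugate in this representation.

```python
import numpy as np

theta = 0.7                                        # rotation angle (example value)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def rep(v):
    return sum(vi * si for vi, si in zip(v, sigma))

B = s1 @ s2                                        # the unit bivector e1 e2, with B^2 = -1
R = np.cos(theta / 2) * np.eye(2) - np.sin(theta / 2) * B   # rotor exp(-B theta/2)

v = np.array([1.0, 2.0, 5.0])
rotated = R @ rep(v) @ R.conj().T                  # sandwich R v R~ (reversion = dagger here)

c, s = np.cos(theta), np.sin(theta)
expected = np.array([c * v[0] - s * v[1],          # rotation by theta in the e1 e2 plane
                     s * v[0] + c * v[1],
                     v[2]])
print(np.allclose(rotated, rep(expected)))         # True
```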
For vectors [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] spanning a parallelogram we have
with the result that [math]\displaystyle{ a \wedge b }[/math] is linear in the product of the "altitude" and the "base" of the parallelogram, that is, its area.
Similar interpretations are true for any number of vectors spanning an [math]\displaystyle{ n }[/math]-dimensional parallelotope; the exterior product of vectors [math]\displaystyle{ a_1, a_2, \ldots , a_n }[/math], that is [math]\displaystyle{ \textstyle \bigwedge_{i=1}^n a_i }[/math], has a magnitude equal to the volume of the [math]\displaystyle{ n }[/math]-parallelotope. An [math]\displaystyle{ n }[/math]-vector does not necessarily have a shape of a parallelotope – this is a convenient visualization. It could be any shape, although the volume equals that of the parallelotope.
We may define the line parametrically by [math]\displaystyle{ p = t + \alpha \ v }[/math] where [math]\displaystyle{ p }[/math] and [math]\displaystyle{ t }[/math] are position vectors for points P and T and [math]\displaystyle{ v }[/math] is the direction vector for the line.
Then
so
and
The mathematical description of rotational forces such as torque and angular momentum often makes use of the cross product of vector calculus in three dimensions with a convention of orientation (which defines handedness).
The cross product can be viewed in terms of the exterior product allowing a more natural geometric interpretation of the cross product as a bivector using the dual relationship
For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle.
Suppose a circular path in an arbitrary plane containing orthonormal vectors [math]\displaystyle{ \hat{u} }[/math] and [math]\displaystyle{ \hat{v} }[/math] is parameterized by angle.
By designating the unit bivector of this plane as the imaginary number
this path vector can be conveniently written in complex exponential form
and the derivative with respect to angle is
So the torque, the rate of change of work [math]\displaystyle{ W }[/math], due to a force [math]\displaystyle{ F }[/math], is
Unlike the cross product description of torque, [math]\displaystyle{ \tau = \mathbf{r} \times F }[/math], the geometric algebra description does not introduce a vector in the normal direction, a vector that does not exist in two dimensions and is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors [math]\displaystyle{ {\hat{u}} }[/math] and [math]\displaystyle{ {\hat{v} } }[/math].
Geometric calculus extends the formalism to include differentiation and integration including differential geometry and differential forms.[46]
Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,
and then one can write
as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it).
In [math]\displaystyle{ 1D }[/math] when [math]\displaystyle{ A }[/math] is a curve with endpoints [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], then
reduces to
or the fundamental theorem of integral calculus.
Also developed are the concept of vector manifold and geometric integration theory (which generalizes differential forms).
Although the connection of geometry with algebra dates back at least as far as Euclid's Elements in the third century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space.[47] Grassmann's algebraic system could be applied to a number of different kinds of spaces, the chief among them being Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view, the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described certain properties (or Strecken such as length, area, and volume). His contribution was to define a new product, the geometric product, on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in [math]\displaystyle{ n }[/math] dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.
Nevertheless, another revolutionary development of the 19th century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, following lectures of Gibbs.
In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of [math]\displaystyle{ i }[/math], [math]\displaystyle{ j }[/math], [math]\displaystyle{ k }[/math] to indicate the basis vectors of [math]\displaystyle{ \mathbf{R}^3 }[/math]: they are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the even subalgebra of the spacetime algebra is isomorphic to the GA of 3D Euclidean space, and the quaternions are isomorphic to the even subalgebra of the GA of 3D Euclidean space, which unifies the three approaches.
Progress on the study of Clifford algebras quietly advanced through the twentieth century, largely through the work of abstract algebraists such as Élie Cartan, Hermann Weyl and Claude Chevalley. The geometrical approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's Geometric Algebra[48] discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory.[5] David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.
In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010).