Projection filters

Short description: Geometric algorithms for signal processing

Projection filters are a set of algorithms based on stochastic analysis and information geometry, or the differential geometric approach to statistics, used to find approximate solutions for filtering problems for nonlinear state-space systems.[1][2][3] The filtering problem consists of estimating the unobserved signal of a random dynamical system from partial noisy observations of the signal. The objective is to compute the probability distribution of the signal conditional on the history of the noise-perturbed observations. This distribution allows one to calculate all statistics of the signal given the history of observations. If this distribution has a density, the density satisfies specific stochastic partial differential equations (SPDEs) called the Kushner-Stratonovich equation or the Zakai equation. It is known that the nonlinear filter density evolves in an infinite dimensional function space.[4][5]

One can choose a finite dimensional family of probability densities, for example Gaussian densities, Gaussian mixtures, or exponential families, in which the infinite-dimensional filter density can be approximated. The basic idea of the projection filter is to use a geometric structure on the chosen space of densities to project the infinite dimensional SPDE of the optimal filter onto the chosen finite dimensional family, obtaining a finite dimensional stochastic differential equation (SDE) for the parameter of the density in the finite dimensional family that approximates the full filter evolution.[3] To do this, the chosen finite dimensional family is equipped with a manifold structure as in information geometry. The projection filter was tested against the optimal filter for the cubic sensor problem, where it could effectively track bimodal densities of the optimal filter that would have been difficult to approximate with standard algorithms like the extended Kalman filter.[2][6] Projection filters are well suited to on-line estimation, as they provide a finite dimensional SDE for the parameter that is quick to implement and runs efficiently in time.[2] Projection filters are also flexible, as they allow fine tuning of the precision of the approximation by choosing richer approximating families, and some exponential families make the correction step in the projection filtering algorithm exact.[3] Some formulations coincide with heuristic-based assumed density filters[3] or with Galerkin methods.[6] Projection filters can also approximate the full infinite-dimensional filter in an optimal way, beyond the optimal approximation of the SPDE coefficients alone, according to precise criteria such as mean square minimization.[7] Projection filters have been studied by the Swedish Defense Research Agency[1] and have been successfully applied to a variety of fields, including navigation, ocean dynamics, quantum optics and quantum systems, estimation of fiber diameters, estimation of chaotic time series, change point detection and other areas.[8]

History and development

The term "projection filter" was first coined in 1987 by Bernard Hanzon,[9] and the related theory and numerical examples were fully developed, expanded and made rigorous during the Ph.D. work of Damiano Brigo, in collaboration with Bernard Hanzon and Francois LeGland.[10][2][3] These works dealt with the projection filters in Hellinger distance and Fisher information metric, that were used to project the optimal filter infinite-dimensional SPDE on a chosen exponential family. The exponential family can be chosen so as to make the prediction step of the filtering algorithm exact.[2] A different type of projection filters, based on an alternative projection metric, the direct [math]\displaystyle{ L^2 }[/math] metric, was introduced in Armstrong and Brigo (2016).[6] With this metric, the projection filters on families of mixture distributions coincide with filters based on Galerkin methods. Later on, Armstrong, Brigo and Rossi Ferrucci (2021)[7] derive optimal projection filters that satisfy specific optimality criteria in approximating the infinite dimensional optimal filter. Indeed, the Stratonovich-based projection filters optimized the approximations of the SPDE separate coefficients on the chosen manifold but not the SPDE solution as a whole. This has been dealt with by introducing the optimal projection filters. The innovation here is to work directly with Ito calculus, instead of resorting to the Stratonovich calculus version of the filter equation. This is based on research on the geometry of Ito Stochastic differential equations on manifolds based on the jet bundle, the so-called 2-jet interpretation of Ito stochastic differential equations on manifolds.[11]

Projection filters derivation

Here the derivation of the different projection filters is sketched.

Stratonovich-based projection filters

This section sketches the derivation of both the initial filter in the Hellinger/Fisher metric, outlined by Hanzon[9] and fully developed by Brigo, Hanzon and LeGland,[10][2] and the later projection filter in the direct L2 metric of Armstrong and Brigo (2016).[6]

It is assumed that the unobserved random signal [math]\displaystyle{ X_t \in \R^m }[/math] is modelled by the Ito stochastic differential equation:

[math]\displaystyle{ d X_t = f(X_t,t) \, d t + \sigma(X_t,t) \, d W_t }[/math]

where f and [math]\displaystyle{ \sigma\, dW }[/math] are [math]\displaystyle{ \R^m }[/math] valued and [math]\displaystyle{ W_t }[/math] is a Brownian motion. Validity of all regularity conditions necessary for the results to hold will be assumed, with details given in the references. The associated noisy observation process [math]\displaystyle{ Y_t \in \R^d }[/math] is modelled by

[math]\displaystyle{ d Y_t = b(X_t,t) \, d t + d V_t }[/math]

where [math]\displaystyle{ b }[/math] is [math]\displaystyle{ \R^d }[/math] valued and [math]\displaystyle{ V_t }[/math] is a Brownian motion independent of [math]\displaystyle{ W_t }[/math]. As hinted above, the full filter is the conditional distribution of [math]\displaystyle{ X_t }[/math] given a prior for [math]\displaystyle{ X_0 }[/math] and the history of [math]\displaystyle{ Y }[/math] up to time [math]\displaystyle{ t }[/math]. If this distribution has a density described informally as
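
The signal and observation model above can be simulated directly. The following is a minimal sketch using an Euler-Maruyama discretization; the particular choices of f, sigma and b (a scalar cubic-sensor-style model) are illustrative assumptions, not prescribed by the general setup.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama simulation of the signal X_t and of the
# observation path Y_t.  The drift f, diffusion coefficient sigma and
# observation function b are illustrative scalar choices (a cubic sensor).
rng = np.random.default_rng(0)

f = lambda x, t: -x            # signal drift
sigma = lambda x, t: 0.5       # signal diffusion coefficient
b = lambda x, t: x ** 3        # observation function

T, N = 1.0, 1000
dt = T / N
X = np.empty(N + 1)
Y = np.empty(N + 1)
X[0], Y[0] = 1.0, 0.0
for k in range(N):
    t = k * dt
    dW = rng.normal(0.0, np.sqrt(dt))   # increment of W_t
    dV = rng.normal(0.0, np.sqrt(dt))   # increment of V_t, independent of W_t
    X[k + 1] = X[k] + f(X[k], t) * dt + sigma(X[k], t) * dW
    Y[k + 1] = Y[k] + b(X[k], t) * dt + dV
```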

[math]\displaystyle{ p_t(x)dx = Prob\{X_t \in dx | \sigma(Y_s, s\leq t)\} }[/math]

where [math]\displaystyle{ \sigma(Y_s, s\leq t) }[/math] is the sigma-field generated by the history of noisy observations [math]\displaystyle{ Y }[/math] up to time [math]\displaystyle{ t }[/math], under suitable technical conditions the density [math]\displaystyle{ p_t }[/math] satisfies the Kushner-Stratonovich SPDE:

[math]\displaystyle{ d p_t = {\cal L}^*_t p_t \ d t + p_t[b(\cdot,t) - E_{p_t}(b(\cdot,t))]^T [ d Y_t - E_{p_t}(b(\cdot,t)) dt] }[/math]

where [math]\displaystyle{ E_p }[/math] is the expectation [math]\displaystyle{ E_p[h] = \int h(x) p(x) dx, }[/math] and the forward diffusion operator [math]\displaystyle{ {\cal L}^*_t }[/math] is

[math]\displaystyle{ {\cal L}_t^* p = - \sum_{i=1}^m \frac{\partial}{\partial x_i} [ f_i(x,t) p(x) ] + \frac{1}{2} \sum_{i,j=1}^m \frac{\partial^2}{\partial x_i \partial x_j} [a_{ij}(x,t) p(x)] }[/math]

where [math]\displaystyle{ a=\sigma \sigma^T }[/math] and [math]\displaystyle{ T }[/math] denotes transposition. To derive the first version of the projection filters, one needs to put the [math]\displaystyle{ p_t }[/math] SPDE in Stratonovich form. One obtains

[math]\displaystyle{ d p_t = {\cal L}^\ast_t\, p_t\,dt - \frac{1}{2}\, p_t\, [\vert b(\cdot,t) \vert^2 - E_{p_t}\{\vert b(\cdot,t) \vert^2\}] \,dt + p_t\, [b(\cdot,t)-E_{p_t}\{b(\cdot,t)\}]^T \circ dY_t\ . }[/math]

Through the chain rule, one immediately derives the SPDE for [math]\displaystyle{ d \sqrt{p_t} }[/math]. To shorten notation, one may rewrite this last SPDE as [math]\displaystyle{ dp = F(p) \,dt + G^T(p) \circ dY\ , }[/math]

where the operators [math]\displaystyle{ F(p) }[/math] and [math]\displaystyle{ G^T(p) }[/math] are defined as

[math]\displaystyle{ F(p) = {\cal L}^\ast_t\, p\, - \frac{1}{2}\, p\, [\vert b(\cdot,t) \vert^2 - E_{p}\{\vert b(\cdot,t) \vert^2\}], }[/math]
[math]\displaystyle{ G^T(p) = p\, [b(\cdot,t)-E_{p}\{b(\cdot,t)\}]^T. }[/math]

The square root version is [math]\displaystyle{ d \sqrt{p} = \frac{1}{2 \sqrt{p}}[ F(p) \,dt + G^T(p) \circ dY]\ . }[/math]
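
For a scalar state, the operators just introduced can be discretized on a grid. The following is a minimal sketch of finite-difference versions of the forward operator and of the vector fields F and G; the helper names and the fact that f, sigma and b are passed in as arguments are our illustrative conventions.

```python
import numpy as np

# Minimal sketch, for a scalar state on a grid: finite-difference versions of the
# forward operator L*_t and of the Stratonovich vector fields F(p) and G(p).
x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]

def forward_operator(p, t, f, sigma):
    # L*_t p = -d/dx [ f p ] + (1/2) d^2/dx^2 [ sigma^2 p ]   (scalar state)
    dfp = np.gradient(f(x, t) * p, dx)
    d2ap = np.gradient(np.gradient(sigma(x, t) ** 2 * p, dx), dx)
    return -dfp + 0.5 * d2ap

def F(p, t, f, sigma, b):
    # F(p) = L*_t p - (1/2) p ( |b|^2 - E_p[|b|^2] )
    bx = b(x, t)
    Eb2 = (bx ** 2 * p).sum() * dx
    return forward_operator(p, t, f, sigma) - 0.5 * p * (bx ** 2 - Eb2)

def G(p, t, b):
    # G(p) = p ( b - E_p[b] )   (scalar observation, so G has a single component)
    bx = b(x, t)
    Eb = (bx * p).sum() * dx
    return p * (bx - Eb)
```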

These are Stratonovich SPDEs whose solutions evolve in infinite dimensional function spaces. For example [math]\displaystyle{ p_t }[/math] may evolve in [math]\displaystyle{ L^2 }[/math] (direct metric [math]\displaystyle{ d_2 }[/math])

[math]\displaystyle{ d_2(p_1,p_2)= \Vert p_1- p_2 \Vert\ , \ \ p_{1,2}\in L^2 }[/math]

or [math]\displaystyle{ \sqrt{p_t} }[/math] may evolve in [math]\displaystyle{ L^2 }[/math] (Hellinger metric [math]\displaystyle{ d_H }[/math])

[math]\displaystyle{ d_H(\sqrt{p_1},\sqrt{p_2})= \Vert \sqrt{p_1}-\sqrt{p_2} \Vert , \ \ \ p_{1,2}\in L^1 }[/math]

where [math]\displaystyle{ \Vert\cdot\Vert }[/math] is the norm of the Hilbert space [math]\displaystyle{ L^2 }[/math]. In any case, [math]\displaystyle{ p_t }[/math] (or [math]\displaystyle{ \sqrt{p_t} }[/math]) will not evolve inside any finite dimensional family of densities,

[math]\displaystyle{ S_\Theta=\{p(\cdot, \theta), \ \theta \in \Theta \subset \R^n\} \quad \text{or} \quad S_\Theta^{1/2}=\{\sqrt{p(\cdot, \theta)}, \ \theta \in \Theta \subset \R^n\}. }[/math]

The projection filter idea is approximating [math]\displaystyle{ p_t(x) }[/math] (or [math]\displaystyle{ \sqrt{p_t(x)} }[/math]) via a finite dimensional density [math]\displaystyle{ p(x,\theta_t) }[/math] (or [math]\displaystyle{ \sqrt{p(x,\theta_t)} }[/math]).
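
Both distances above are straightforward to evaluate numerically. A minimal sketch follows, with two Gaussian densities on a grid as illustrative inputs.

```python
import numpy as np

# Minimal sketch: the direct L^2 distance d_2 and the Hellinger-type distance d_H
# defined above, evaluated on a grid for two illustrative Gaussian densities.
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]

def gaussian(mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

p1, p2 = gaussian(0.0, 1.0), gaussian(1.0, 1.5)
d_2 = np.sqrt(((p1 - p2) ** 2).sum() * dx)                       # direct metric
d_H = np.sqrt(((np.sqrt(p1) - np.sqrt(p2)) ** 2).sum() * dx)     # Hellinger metric
```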

The fact that the filter SPDE is in Stratonovich form allows for the following. As Stratonovich SPDEs satisfy the chain rule, [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] behave as vector fields. Thus, the equation is characterized by a [math]\displaystyle{ dt }[/math] vector field [math]\displaystyle{ F }[/math] and a [math]\displaystyle{ dY_t }[/math] vector field [math]\displaystyle{ G }[/math]. For this version of the projection filter, one is content to deal with the two vector fields separately. One may project [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] on the tangent space of the densities in [math]\displaystyle{ S_\Theta }[/math] (direct metric) or of their square roots (Hellinger metric). The direct metric case yields

[math]\displaystyle{ dp(\cdot,\theta_t) = \Pi_{p(\cdot,\theta_t)}[F(p(\cdot,\theta_t))] \,dt + \Pi_{p(\cdot,\theta_t)}[G^T(p(\cdot,\theta_t))] \circ dY_t\ }[/math]

where [math]\displaystyle{ \Pi_{p(\cdot,\theta)} }[/math] is the tangent space projection at the point [math]\displaystyle{ p(\cdot,\theta) }[/math] for the manifold [math]\displaystyle{ S_\Theta }[/math], and where, when applied to a vector such as [math]\displaystyle{ G^T }[/math], it is assumed to act component-wise by projecting each of [math]\displaystyle{ G^T }[/math]'s components. Since a basis of this tangent space is

[math]\displaystyle{ \left\{ \frac{\partial p(\cdot,\theta)}{\partial \theta_1},\cdots, \frac{\partial p(\cdot,\theta)}{\partial \theta_n} \right\}, }[/math]

denoting the inner product of [math]\displaystyle{ L^2 }[/math] by [math]\displaystyle{ \langle \cdot, \cdot \rangle }[/math], one defines the metric

[math]\displaystyle{ \gamma_{ij}(\theta) = \left\langle \frac{\partial {p(\cdot,\theta)}}{ \partial \theta_i}\, , \frac{\partial {p(\cdot,\theta)}}{ \partial \theta_j} \right\rangle = \int \frac{\partial p(x,\theta)}{\partial \theta_i}\, \frac{\partial p(x,\theta)}{\partial \theta_j}\, d x }[/math]

and the projection is thus

[math]\displaystyle{ \Pi^\gamma_{p(\cdot,\theta)} [v] = \sum_{i=1}^n \left[ \sum_{j=1}^n \gamma^{ij}(\theta)\; \left\langle v,\, \frac{\partial {p(\cdot,\theta)}}{\partial \theta_j} \right\rangle \right]\; \frac{\partial {p(\cdot,\theta)}}{\partial \theta_i} }[/math]

where [math]\displaystyle{ \gamma^{ij} }[/math] is the inverse of [math]\displaystyle{ \gamma_{ij} }[/math]. The projected equation thus reads

[math]\displaystyle{ d p(\cdot, \theta_t) = \Pi_{p(\cdot,\theta)}[F(p(\cdot, \theta_t))] dt + \Pi_{p(\cdot,\theta)}[G^T(p(\cdot, \theta_t))] \circ dY_t }[/math]

which can be written as

[math]\displaystyle{ \sum_{i=1}^n \frac{\partial p(\cdot, \theta_t)}{\partial \theta_i}\circ d \theta_i = \sum_{i=1}^n \left[ \sum_{j=1}^n \gamma^{ij}(\theta)\; \left\langle F(p(\cdot, \theta_t)),\, \frac{\partial {p(\cdot,\theta)}}{\partial \theta_j} \right\rangle \right]\; \frac{\partial {p(\cdot,\theta)}}{\partial \theta_i} dt + \sum_{i=1}^n \left[ \sum_{j=1}^n \gamma^{ij}(\theta)\; \left\langle G^T(p(\cdot, \theta_t)),\, \frac{\partial {p(\cdot,\theta)}}{\partial \theta_j} \right\rangle \right]\; \frac{\partial {p(\cdot,\theta)}}{\partial \theta_i} \circ dY_t , }[/math]

where it has been crucial that Stratonovich calculus obeys the chain rule. From the above equation, the final projection filter SDE is [math]\displaystyle{ d \theta_i = \left[\sum_{j=1}^n \gamma^{ij}(\theta_t)\; \int F(p(x, \theta_t)) \; \frac{\partial p(x,\theta_t)}{\partial \theta_j} dx \right] dt + \sum_{k=1}^d\; \left[ \sum_{j=1}^n \gamma^{ij}(\theta_t)\; \int G_k(p(x, \theta_t)) \; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; d x \right] \circ dY_t^k }[/math]

with initial condition a chosen [math]\displaystyle{ \theta_0 }[/math].

By substituting the definition of the operators F and G we obtain the fully explicit projection filter equation in direct metric:

[math]\displaystyle{ d \theta_i(t) = \left[\sum_{j=1}^n \gamma^{ij}(\theta_t)\; \int {{\cal L}_t^\ast\, p(x,\theta_t)}\; \frac{\partial p(x,\theta_t)}{\partial \theta_j} dx - \sum_{j=1}^n \gamma^{ij}(\theta_t)\; \int \frac{1}{2} \left[\vert b(x,t) \vert^2 - \int \vert b(z,t) \vert^2 p(z,\theta_t)dz\right] \; p(x,\theta_t) \; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; d x \right] dt }[/math]

[math]\displaystyle{ + \sum_{k=1}^d\; \left[ \sum_{j=1}^n \gamma^{ij}(\theta_t)\; \int \left[ b_k(x,t) - \int b_k(z,t) p(z,\theta_t) dz \right] \; p(x,\theta_t) \; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; d x \right] \circ dY_t^k\ . }[/math]
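
Putting the ingredients together, the following is a minimal sketch of a single time step of the direct-metric projection filter for a scalar state and a scalar observation. The Gaussian family parametrised by mean and log standard deviation, the finite-difference tangent vectors, the model functions f, sigma and b, and the helper names are all illustrative assumptions; in particular, the Stratonovich integral is discretised with a plain Euler rule, which is adequate only as an illustration.

```python
import numpy as np

# Minimal sketch: one Euler step of the direct-metric projection filter above,
# for a scalar state, a scalar observation and an illustrative Gaussian family
# with theta = (mean, log standard deviation).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
f = lambda x, t: -x            # signal drift (illustrative)
sigma = lambda x, t: 0.5       # signal diffusion coefficient (illustrative)
b = lambda x, t: x ** 3        # observation function (illustrative)

def p_theta(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def tangent_basis(theta, eps=1e-5):
    # Row j is dp/dtheta_j on the grid, by central finite differences.
    rows = []
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tm = np.array(theta, dtype=float)
        tp[j] += eps
        tm[j] -= eps
        rows.append((p_theta(tp) - p_theta(tm)) / (2.0 * eps))
    return np.array(rows)

def projection_filter_step(theta, dY, t, dt):
    theta = np.asarray(theta, dtype=float)
    p = p_theta(theta)
    B = tangent_basis(theta)
    gamma = (B @ B.T) * dx                       # gamma_ij = <dp/dtheta_i, dp/dtheta_j>
    bx = b(x, t)
    Lp = -np.gradient(f(x, t) * p, dx) \
         + 0.5 * np.gradient(np.gradient(sigma(x, t) ** 2 * p, dx), dx)
    Fp = Lp - 0.5 * p * (bx ** 2 - (bx ** 2 * p).sum() * dx)     # F(p) on the grid
    Gp = p * (bx - (bx * p).sum() * dx)                          # G(p) on the grid
    drift = np.linalg.solve(gamma, (B * Fp).sum(axis=1) * dx)    # gamma^{-1} <F, dp/dtheta>
    gain = np.linalg.solve(gamma, (B * Gp).sum(axis=1) * dx)     # gamma^{-1} <G, dp/dtheta>
    return theta + drift * dt + gain * dY
```

A call such as `projection_filter_step([0.0, 0.0], dY=0.02, t=0.0, dt=0.001)` advances the parameter by one observation increment; iterating over an observation path produces an approximate filter trajectory.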

If one uses the Hellinger distance instead, square roots of densities are needed. The tangent space basis is then

[math]\displaystyle{ \left\{ \frac{\partial\sqrt{ p(\cdot,\theta)}}{\partial \theta_1},\cdots, \frac{\partial \sqrt{p(\cdot,\theta)}}{\partial \theta_n} \right\}, }[/math]

and one defines the metric

[math]\displaystyle{ \frac{1}{4} g_{ij}(\theta) = \left \langle \frac{\partial \sqrt{p}}{ \partial \theta_i}\, , \frac{\partial \sqrt{p}}{ \partial \theta_j}\right \rangle = \frac{1}{4} \int \frac{1}{p(x,\theta)}\, \frac{\partial p(x,\theta)}{\partial \theta_i}\, \frac{\partial p(x,\theta)}{\partial \theta_j}\, d x . }[/math]

The metric [math]\displaystyle{ g }[/math] is the Fisher information metric. One follows steps completely analogous to the direct metric case and the filter equation in Hellinger/Fisher metric is

[math]\displaystyle{ d \theta_i = \left[ \sum_{j=1}^n g^{ij}(\theta_t)\; \int \frac{F(p(x,\theta_t))}{p(x,\theta_t)}\; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; dx \right] dt + \sum_{k=1}^d\; \left[ \sum_{j=1}^n g^{ij}(\theta_t)\; \int \frac{G_k(p(x,\theta_t))}{p(x,\theta_t)}\; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; dx \right] \circ dY_t^k\ , }[/math]

again with initial condition a chosen [math]\displaystyle{ \theta_0 }[/math].

Substituting F and G one obtains [math]\displaystyle{ d \theta_i(t) = \left[ \sum_{j=1}^n g^{ij}(\theta_t)\; \int \frac{{\cal L}_t^\ast\, p(x,\theta_t)}{p(x,\theta_t)}\; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; dx - \sum_{j=1}^n g^{ij}(\theta_t) \int \frac{1}{2} \vert b(x,t) \vert^2 \frac{\partial p(x,\theta_t)}{\partial \theta_j} dx \right] dt }[/math]

[math]\displaystyle{ + \sum_{k=1}^d\; \left[ \sum_{j=1}^n g^{ij}(\theta_t)\; \int b_k(x,t)\; \frac{\partial p(x,\theta_t)}{\partial \theta_j}\; dx \right] \circ dY_t^k\ . }[/math]
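
The Fisher metric above can be checked numerically. The following is a minimal sketch for an illustrative Gaussian family parametrised by mean and log standard deviation, whose exact Fisher matrix is diag(1/s^2, 2); the helper names are ours.

```python
import numpy as np

# Minimal sketch: the Fisher information matrix g_ij(theta) of the formula above,
# evaluated on a grid for an illustrative Gaussian family.
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]

def p_theta(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def fisher_metric(theta, eps=1e-5):
    p = p_theta(theta)
    grads = []
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tm = np.array(theta, dtype=float)
        tp[j] += eps
        tm[j] -= eps
        grads.append((p_theta(tp) - p_theta(tm)) / (2.0 * eps))
    n = len(theta)
    g = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # g_ij = integral of (1/p) dp/dtheta_i dp/dtheta_j dx
            g[i, j] = (grads[i] * grads[j] / p).sum() * dx
    return g

print(fisher_metric([0.0, 0.0]))    # approximately [[1, 0], [0, 2]]
```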

The projection filter in direct metric, when implemented on a manifold [math]\displaystyle{ S_\Theta }[/math] of mixture families, leads to equivalence with a Galerkin method.[6]

The projection filter in Hellinger/Fisher metric when implemented on a manifold [math]\displaystyle{ S_\Theta^{1/2} }[/math] of square roots of an exponential family of densities is equivalent to the assumed density filters.[3]

One should note that it is also possible to project the simpler Zakai equation for an unnormalized version of the density p. This would result in the same Hellinger projection filter but in a different direct metric projection filter.[6]

Finally, if in the exponential family case one includes among the sufficient statistics of the exponential family the observation function in [math]\displaystyle{ dY_t }[/math], namely [math]\displaystyle{ b(x) }[/math]'s components and [math]\displaystyle{ |b(x)|^2 }[/math], then one can see that the correction step in the filtering algorithm becomes exact. In other terms, the projection of the vector field [math]\displaystyle{ G }[/math] is exact, resulting in [math]\displaystyle{ G }[/math] itself. Writing the filtering algorithm in a setting with continuous state [math]\displaystyle{ X }[/math] and discrete time observations [math]\displaystyle{ Y }[/math], one can see that the correction step at each new observation is exact, as the related Bayes formula entails no approximation.[3]
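
A minimal sketch of this exact correction step in the discrete-time observation setting follows, assuming a scalar observation y = b(x) + noise with noise variance R and an exponential family whose first two natural parameters multiply b(x) and b(x)^2; this parameter layout and the function name are illustrative conventions.

```python
import numpy as np

# Minimal sketch of the exact correction step for an exponential family that
# includes b(x) and b(x)^2 among its sufficient statistics.  With y = b(x) + noise,
# noise ~ N(0, R), Bayes' formula multiplies the prior density by
# exp( y*b(x)/R - b(x)^2/(2R) ) up to a constant, so only the natural parameters
# multiplying b(x) and b(x)^2 change, with no approximation.
def exact_correction(theta, y, R):
    theta = np.array(theta, dtype=float)
    theta[0] += y / R        # natural parameter multiplying b(x)
    theta[1] -= 0.5 / R      # natural parameter multiplying b(x)^2
    return theta             # the normalising constant is recomputed from the new theta
```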

Optimal projection filters based on Ito vector and Ito jet projections

Now rather than considering the exact filter SPDE in Stratonovich calculus form, one keeps it in Ito calculus form

[math]\displaystyle{ d p_t = {\cal L}^*_t p_t \ d t + p_t[b(\cdot,t) - E_{p_t}(b(\cdot,t))]^T [ d Y_t - E_{p_t}(b(\cdot,t)) dt]. }[/math]

In the Stratonovich projection filters above, the vector fields [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] were projected separately. By definition, the projection is the optimal approximation for [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] separately, although this does not imply it provides the best approximation for the filter SPDE solution as a whole. Indeed, the Stratonovich projection, acting on the two terms [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math] separately, does not guarantee optimality of the solution [math]\displaystyle{ p(\cdot,\theta_{0+\delta t}) }[/math] as an approximation of the exact [math]\displaystyle{ p_{0+\delta t} }[/math] for small [math]\displaystyle{ \delta t }[/math]. One may look for a norm [math]\displaystyle{ \| \cdot \| }[/math] to be applied to the solution, for which

[math]\displaystyle{ \theta_{0+\delta t} \approx \mbox{argmin}_\theta\ \| p_{0+\delta t}- p(\cdot,\theta) \|. }[/math]

The Ito-vector projection is obtained as follows. Let us choose a norm for the space of densities, [math]\displaystyle{ \|\cdot \| }[/math], which might be associated with the direct metric or the Hellinger metric.

One first chooses the diffusion term of the approximating Ito equation for [math]\displaystyle{ \theta_t }[/math] by minimizing (but not zeroing) the [math]\displaystyle{ \delta t }[/math] term of the Taylor expansion of the mean square error

[math]\displaystyle{ E_t[\|p_{0+\delta t}-p(\cdot,\theta_{0+\delta t})\|^2] }[/math],

and then chooses the drift term of the approximating Ito equation that minimizes the [math]\displaystyle{ (\delta t)^2 }[/math] term of the same expansion. Here the [math]\displaystyle{ \delta t }[/math] order term is minimized, not zeroed, so one never attains [math]\displaystyle{ (\delta t)^2 }[/math] convergence, only [math]\displaystyle{ \delta t }[/math] convergence.

A further benefit of the Ito vector projection is that it minimizes the order 1 Taylor expansion in [math]\displaystyle{ \delta t }[/math] of

[math]\displaystyle{ \|E[p_{0+\delta t}-p(\cdot,\theta_{0+\delta t})]\|. }[/math]

To achieve [math]\displaystyle{ (\delta t)^2 }[/math] convergence, rather than [math]\displaystyle{ \delta t }[/math] convergence, the Ito-jet projection is introduced. It is based on the notion of metric projection.

The metric projection of a density [math]\displaystyle{ p \in L^2 }[/math] (or [math]\displaystyle{ \sqrt{p} \in L^2 }[/math]) onto the manifold [math]\displaystyle{ S_\Theta }[/math] (or [math]\displaystyle{ S_\Theta^{1/2} }[/math]) is the closest point on [math]\displaystyle{ S_\Theta }[/math] (or [math]\displaystyle{ S_\Theta^{1/2} }[/math]) to [math]\displaystyle{ p }[/math] (or [math]\displaystyle{ \sqrt{p} }[/math]). Denote it by [math]\displaystyle{ \pi(p) }[/math]. The metric projection is, by definition, the best approximation of [math]\displaystyle{ p }[/math] in [math]\displaystyle{ S_\Theta }[/math] according to the chosen metric. Thus the idea is to find a projection filter that comes as close as possible to the metric projection. In other terms, one considers the criterion [math]\displaystyle{ \theta_{0+\delta t} \approx \mbox{argmin}_\theta\ \| \pi(p_{0+\delta t})- p(\cdot,\theta) \|. }[/math]
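
The metric projection itself can be computed numerically by direct minimisation. The following is a minimal sketch in the direct L^2 metric for an illustrative Gaussian family, using a generic optimiser (SciPy's Nelder-Mead); the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: the metric projection pi(p) in the direct L^2 metric, obtained
# by minimising || p - p(., theta) || over theta for an illustrative Gaussian
# family with theta = (mean, log standard deviation).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]

def p_theta(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def metric_projection(p, theta_init=(0.0, 0.0)):
    loss = lambda th: ((p - p_theta(th)) ** 2).sum() * dx
    return minimize(loss, theta_init, method="Nelder-Mead").x

# Example: project an illustrative bimodal density onto the Gaussian family.
p_bimodal = 0.5 * (p_theta([-1.5, np.log(0.5)]) + p_theta([1.5, np.log(0.5)]))
theta_star = metric_projection(p_bimodal)
```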

The detailed calculations are lengthy and laborious,[7] but the resulting approximation achieves [math]\displaystyle{ (\delta t)^2 }[/math] convergence. Indeed, the Ito jet projection attains the following optimality criterion. It zeroes the [math]\displaystyle{ \delta t }[/math] order term and it minimizes the [math]\displaystyle{ (\delta t)^2 }[/math] order term of the Taylor expansion of the mean square distance in [math]\displaystyle{ L^2 }[/math] between [math]\displaystyle{ \pi(p_{0+\delta t}) }[/math] and [math]\displaystyle{ p(\cdot,\theta_{0+\delta t}) }[/math].

Both the Ito vector and the Ito jet projection result in final SDEs, driven by the observations [math]\displaystyle{ dY }[/math], for the parameter [math]\displaystyle{ \theta_t }[/math] that best approximates the exact filter evolution for small times.[7]

Applications

Jones and Soatto (2011) mention projection filters as possible algorithms for on-line estimation in visual-inertial navigation,[12] mapping and localization, while again on navigation Azimi-Sadjadi and Krishnaprasad (2005)[13] use projection filters algorithms. The projection filter has been also considered for applications in ocean dynamics by Lermusiaux 2006.[14] Kutschireiter, Rast, and Drugowitsch (2022)[15] refer to the projection filter in the context of continuous time circular filtering. For quantum systems applications, see for example van Handel and Mabuchi (2005),[16] who applied the quantum projection filter to quantum optics, studying a quantum model of optical phase bistability of a strongly coupled two-level atom in an optical cavity. Further applications to quantum systems are considered in Gao, Zhang and Petersen (2019).[17] Ma, Zhao, Chen and Chang (2015) refer to projection filters in the context of hazard position estimation, while Vellekoop and Clark (2006)[18] generalize the projection filter theory to deal with changepoint detection. Harel, Meir and Opper (2015)[19] apply the projection filters in assumed density form to the filtering of optimal point processes with applications to neural encoding. Broecker and Parlitz (2000)[20] study projection filter methods for noise reduction in chaotic time series. Zhang, Wang, Wu and Xu (2014) [21] apply the Gaussian projection filter as part of their estimation technique to deal with measurements of fiber diameters in melt-blown nonwovens.

References

  1. 1.0 1.1 "Swedish Defense Research Agency Scientific Report" (PDF). http://www.foi.se/ReportFiles/foir_1074.pdf. 
  2. 2.0 2.1 2.2 2.3 2.4 2.5 Brigo, Damiano; Hanzon, Bernard; LeGland, Francois (1998). "A differential geometric approach to nonlinear filtering: the projection filter". IEEE Transactions on Automatic Control 43 (2): 247–252. 
  3. 3.0 3.1 3.2 3.3 3.4 3.5 3.6 Brigo, Damiano; Hanzon, Bernard; LeGland, Francois (1999). "Approximate nonlinear filtering by projection on exponential manifolds of densities.". Bernoulli 5 (3): 407–430. 
  4. Chaleyat-Maurel, Mireille and Dominique Michel (1984), Des resultats de non existence de filtre de dimension finie. Stochastics, volume 13, issue 1+2, pages 83–102.
  5. M. Hazewinkel, S.I. Marcus, H.J. Sussmann (1983). Nonexistence of finite-dimensional filters for conditional statistics of the cubic sensor problem. Systems & Control Letters 3(6), Pages 331-340, https://doi.org/10.1016/0167-6911(83)90074-9.
  6. 6.0 6.1 6.2 6.3 6.4 6.5 Armstrong, John; Brigo, Damiano (2016). "Nonlinear filtering via stochastic PDE projection on mixture manifolds in L2 direct metric". Mathematics of Control, Signals and Systems 28 (1): 1–33. 
  7. 7.0 7.1 7.2 7.3 Armstrong, John; Brigo, Damiano; Rossi Ferrucci, Emilio (2019). "Optimal approximation of {SDE}s on submanifolds: the Ito-vector and Ito-jet projections". Proceedings of the London Mathematical Society 119 (1): 176–213. 
  8. Armstrong, J., Brigo, D., and Hanzon, B. (2023). Optimal projection filters with information geometry. Info. Geo. (2023). https://doi.org/10.1007/s41884-023-00108-x
  9. 9.0 9.1 Bernard Hanzon (1987). A differential-geometric approach to approximate nonlinear filtering. In: C.T.J. Dodson, Editor, Geometrization of Statistical Theory, pages 219–223. ULMD Publications, University of Lancaster
  10. 10.0 10.1 Brigo, D. (1996). Filtering by projection on the manifold of exponential densities. PhD dissertation, Free University of Amsterdam
  11. John Armstrong and Damiano Brigo (2018). Intrinsic stochastic differential equations as jets. Proceedings of the Royal Society A - Mathematical physical and engineering sciences, 474(2210), 28 pages. doi: 10.1098/rspa.2017.0559.
  12. Jones, Eagle S; Soatto, Massimo (2011). "Visual-inertial navigation, mapping and localization: A scalable real-time causal approach". The International Journal of Robotics Research 30 (4): 407–430. 
  13. Azimi-Sadjadi, Babak; Krishnaprasad, P.S. (2005). "Approximate nonlinear filtering and its application in navigation". Automatica 41 (6): 945–956. 
  14. Lermusiaux, Pierre F. J (2006). "Uncertainty estimation and prediction for interdisciplinary ocean dynamics". Journal of Computational Physics 217 (1): 176–199. 
  15. Kutschireiter, Anna; Rast, Luke; Drugowitsch, Jan (2022). "Projection filtering with observed state increments with applications in continuous-time circular filtering". IEEE Transactions on Signal Processing 70. 
  16. van Handel, Ramon; Mabuchi, Hideo (2005). "Quantum projection filter for a highly nonlinear model in cavity QED". Journal of Optics B: Quantum and Semiclassical Optics 7. 
  17. Gao, Qing; Gao, Guofeng; Petersen, Ian R (2019). "An exponential quantum projection filter for open quantum systems". Automatica 99: 59–68. 
  18. Vellekoop, M. H.; Clark, J. M. C. (2006). "A nonlinear filtering approach to changepoint detection problems: Direct and differential-geometric methods". SIAM Review 48 (2): 329–356. 
  19. Harel, Yuval; Meir, Ron; Opper, Manfred (2015). "A tractable approximation to optimal point process filtering: Application to neural encoding". Advances in Neural Information Processing Systems 28. 
  20. Broecker, Jochen; Parlitz, Ulrich (2000). "Noise reduction and filtering of chaotic time series". Proc. NOLTA 2000. 
  21. Zhang, Xian Miao; Wu Wang, Rong; Xu, Bugau (2014). "Automated measurements of fiber diameters in melt-blown nonwovens". Journal of Industrial Textiles 43 (4): 593–605. 



