Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements. These measurements are acquired in a distributed manner by a set of sensors.
Many civilian and military applications require monitoring that can identify objects in a specific area, such as monitoring the front entrance of a private house by a single camera. Monitored areas that are large relative to the objects of interest often require multiple sensors (e.g., infra-red detectors) at multiple locations. A centralized observer or computer application monitors the sensors. Communication, power, and bandwidth constraints call for an efficient design of the sensors, the transmission scheme, and the processing.
The CodeBlue system[1] of Harvard University is an example where a vast number of sensors distributed among hospital facilities allow staff to locate a patient in distress. In addition, the sensor array enables online recording of medical information while allowing the patient to move around. Military applications (e.g., locating an intruder in a secured area) are also good candidates for deploying a wireless sensor network.
Let [math]\displaystyle{ \theta }[/math] denote the position of interest. A set of [math]\displaystyle{ N }[/math] sensors acquires measurements [math]\displaystyle{ x_n = \theta + w_n }[/math] contaminated by additive noise [math]\displaystyle{ w_n }[/math] obeying some known or unknown probability density function (PDF). The sensors transmit their measurements to a central processor. The [math]\displaystyle{ n }[/math]th sensor encodes [math]\displaystyle{ x_n }[/math] by a function [math]\displaystyle{ m_n(x_n) }[/math]. The application processing the data applies a pre-defined estimation rule [math]\displaystyle{ \hat{\theta}=f(m_1(x_1),\ldots,m_N(x_N)) }[/math]. The set of message functions [math]\displaystyle{ m_n,\, 1\leq n\leq N }[/math] and the fusion rule [math]\displaystyle{ f(m_1(x_1),\ldots,m_N(x_N)) }[/math] are designed to minimize the estimation error, for example the mean squared error (MSE) [math]\displaystyle{ \mathbb{E}\|\theta-\hat{\theta}\|^2 }[/math].
Ideally, sensors transmit their measurements [math]\displaystyle{ x_n }[/math] directly to the processing center, that is [math]\displaystyle{ m_n(x_n)=x_n }[/math]. In this setting, the maximum likelihood estimator (MLE) [math]\displaystyle{ \hat{\theta} = \frac{1}{N}\sum_{n=1}^N x_n }[/math] is an unbiased estimator whose MSE is [math]\displaystyle{ \mathbb{E}\|\theta-\hat{\theta}\|^2 = \text{var}(\hat{\theta}) = \frac{\sigma^2}{N} }[/math], assuming white Gaussian noise [math]\displaystyle{ w_n\sim\mathcal{N}(0,\sigma^2) }[/math]. The following sections present alternative designs when the sensors are bandwidth constrained to transmitting a single bit, that is [math]\displaystyle{ m_n(x_n)\in\{0,1\} }[/math].
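As a point of reference, the following minimal simulation sketch of the unconstrained setting (all parameter values are illustrative and not taken from any cited design) confirms that the sample-mean MLE attains an MSE of about [math]\displaystyle{ \sigma^2/N }[/math]:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.3          # true scalar location (illustrative value)
sigma = 0.5          # noise standard deviation
N = 100              # number of sensors
trials = 20000

# Each sensor forwards its raw measurement x_n = theta + w_n;
# the fusion center averages them (the MLE under Gaussian noise).
x = theta + sigma * rng.standard_normal((trials, N))
theta_hat = x.mean(axis=1)

mse = np.mean((theta_hat - theta) ** 2)
print(mse, sigma**2 / N)   # the two values nearly coincide
```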
For Gaussian noise [math]\displaystyle{ w_n\sim\mathcal{N}(0,\sigma^2) }[/math], a one-bit system can be designed as follows: each sensor compares its measurement against a common threshold and transmits [math]\displaystyle{ m_n(x_n)=I(x_n-\tau) }[/math], where [math]\displaystyle{ I(\cdot) }[/math] takes the value 1 for a positive argument and 0 otherwise.
Here [math]\displaystyle{ \tau }[/math] is a parameter leveraging our prior knowledge of the approximate location of [math]\displaystyle{ \theta }[/math]. In this design, the random value of [math]\displaystyle{ m_n(x_n) }[/math] is Bernoulli distributed with parameter [math]\displaystyle{ q=Q\left(\frac{\tau-\theta}{\sigma}\right) }[/math], where [math]\displaystyle{ Q(\cdot) }[/math] denotes the Gaussian tail function. The processing center averages the received bits to form an estimate [math]\displaystyle{ \hat{q} }[/math] of [math]\displaystyle{ q }[/math], which is then inverted to obtain an estimate of [math]\displaystyle{ \theta }[/math]. It can be verified that for the optimal (and infeasible) choice of [math]\displaystyle{ \tau=\theta }[/math] the variance of this estimator is [math]\displaystyle{ \frac{\pi\sigma^2}{2N} }[/math], which is only [math]\displaystyle{ \pi/2 }[/math] times the variance of the MLE without a bandwidth constraint. The variance increases as [math]\displaystyle{ \tau }[/math] deviates from the true value of [math]\displaystyle{ \theta }[/math], but it can be shown that as long as [math]\displaystyle{ |\tau-\theta|\sim\sigma }[/math] the penalty factor in the MSE remains approximately 2. Choosing a suitable value for [math]\displaystyle{ \tau }[/math] is a major disadvantage of this method, since the model does not assume prior knowledge of the approximate location of [math]\displaystyle{ \theta }[/math]. A coarse estimation can be used to overcome this limitation; however, it requires additional hardware in each of the sensors.
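A minimal simulation sketch of this one-bit design (illustrative parameter values; the threshold-and-invert rule follows the description above, with the Gaussian tail function inverted at the fusion center):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
theta, sigma, N, trials = 1.3, 0.5, 100, 20000
tau = 1.2            # threshold chosen from coarse prior knowledge of theta

x = theta + sigma * rng.standard_normal((trials, N))
bits = (x > tau).astype(float)              # m_n(x_n) = I(x_n - tau)

q_hat = bits.mean(axis=1)                   # estimate of q = Q((tau - theta)/sigma)
q_hat = np.clip(q_hat, 1e-6, 1 - 1e-6)      # avoid inverting q_hat = 0 or 1
theta_hat = tau - sigma * norm.isf(q_hat)   # invert the Gaussian tail function

mse = np.mean((theta_hat - theta) ** 2)
print(mse, np.pi * sigma**2 / (2 * N))      # close when tau is near theta
```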
A system design for an arbitrary (but known) noise PDF can be found in.[3] In this setting it is assumed that both [math]\displaystyle{ \theta }[/math] and the noise [math]\displaystyle{ w_n }[/math] are confined to some known interval [math]\displaystyle{ [-U,U] }[/math]. The estimator of [3] also reaches an MSE that is a constant factor times [math]\displaystyle{ \frac{\sigma^2}{N} }[/math]. In this method, the prior knowledge of [math]\displaystyle{ U }[/math] replaces the parameter [math]\displaystyle{ \tau }[/math] of the previous approach.
A noise model may sometimes be available while the exact PDF parameters are unknown (e.g., a Gaussian PDF with unknown [math]\displaystyle{ \sigma }[/math]). The idea proposed in [4] for this setting is to use two thresholds [math]\displaystyle{ \tau_1,\tau_2 }[/math], such that [math]\displaystyle{ N/2 }[/math] sensors are designed with [math]\displaystyle{ m_A(x)=I(x-\tau_1) }[/math] and the other [math]\displaystyle{ N/2 }[/math] sensors use [math]\displaystyle{ m_B(x)=I(x-\tau_2) }[/math]. The processing center then forms its estimate from the two resulting bit averages, which jointly determine [math]\displaystyle{ \theta }[/math] and the unknown noise parameter (see the sketch below).
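For the Gaussian example with unknown [math]\displaystyle{ \sigma }[/math], a minimal sketch of one natural fusion rule is given below: the averages over the two sensor groups estimate two Bernoulli parameters, and inverting the two tail-function equations yields both [math]\displaystyle{ \theta }[/math] and [math]\displaystyle{ \sigma }[/math]. This is an illustration only; the exact estimation rule of [4] may differ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta, sigma, N = 1.3, 0.5, 200           # sigma is unknown to the fusion center
tau1, tau2 = 0.8, 1.8                     # two thresholds set from prior knowledge

x = theta + sigma * rng.standard_normal(N)
bits_A = (x[: N // 2] > tau1).astype(float)   # m_A(x) = I(x - tau1)
bits_B = (x[N // 2 :] > tau2).astype(float)   # m_B(x) = I(x - tau2)

q1 = np.clip(bits_A.mean(), 1e-6, 1 - 1e-6)   # estimates of Q((tau_i - theta)/sigma)
q2 = np.clip(bits_B.mean(), 1e-6, 1 - 1e-6)

# Solve tau_i - theta = sigma * Q^{-1}(q_i), i = 1, 2, for (theta, sigma).
z1, z2 = norm.isf(q1), norm.isf(q2)
sigma_hat = (tau2 - tau1) / (z2 - z1)
theta_hat = tau1 - sigma_hat * z1
print(theta_hat, sigma_hat)
```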
As before, prior knowledge is necessary to set values for [math]\displaystyle{ \tau_1,\tau_2 }[/math] so that the MSE stays within a reasonable factor of the unconstrained MLE variance.
The system design of [3] also treats the case in which the structure of the noise PDF is unknown. In this scenario, [math]\displaystyle{ \theta }[/math] is confined to the known interval [math]\displaystyle{ [-U,U] }[/math] and the noise [math]\displaystyle{ w_n }[/math] is drawn from an unknown PDF belonging to a class [math]\displaystyle{ \mathcal{P} }[/math] of bounded noise distributions, so that each measurement [math]\displaystyle{ x_n }[/math] lies in [math]\displaystyle{ [-2U,2U] }[/math].
In addition, the message functions are limited to have the form [math]\displaystyle{ m_n(x_n)=I(x_n\in S_n) }[/math],
where each [math]\displaystyle{ S_n }[/math] is a subset of [math]\displaystyle{ [-2U,2U] }[/math]. The fusion estimator is also restricted to be linear, i.e. [math]\displaystyle{ \hat{\theta}=\sum\limits_{n=1}^{N}\alpha_n m_n(x_n) }[/math].
The design should set the decision intervals [math]\displaystyle{ S_n }[/math] and the coefficients [math]\displaystyle{ \alpha_n }[/math]. Intuitively, one would allocate [math]\displaystyle{ N/2 }[/math] sensors to encode the first bit of [math]\displaystyle{ \theta }[/math] by setting their decision interval to [math]\displaystyle{ [0,2U] }[/math], then [math]\displaystyle{ N/4 }[/math] sensors would encode the second bit by setting their decision interval to [math]\displaystyle{ [-U,0]\cup[U,2U] }[/math], and so on. It can be shown that these decision intervals and the corresponding set of coefficients [math]\displaystyle{ \alpha_n }[/math] produce a universal [math]\displaystyle{ \delta }[/math]-unbiased estimator, which is an estimator satisfying [math]\displaystyle{ |\mathbb{E}(\theta-\hat{\theta})|\lt \delta }[/math] for every possible value of [math]\displaystyle{ \theta\in[-U,U] }[/math] and for every noise PDF in the class [math]\displaystyle{ \mathcal{P} }[/math]. In fact, this intuitive design of the decision intervals is also optimal in the following sense: it requires [math]\displaystyle{ N\geq\lceil\log\frac{8U}{\delta}\rceil }[/math] sensors to satisfy the universal [math]\displaystyle{ \delta }[/math]-unbiased property, while theoretical arguments show that an optimal (and more complex) design of the decision intervals would require [math]\displaystyle{ N\geq\lceil\log\frac{2U}{\delta}\rceil }[/math]; that is, the number of sensors is nearly optimal. It is also argued in [3] that, for a small enough target MSE [math]\displaystyle{ \mathbb{E}\|\theta-\hat{\theta}\|^2\leq\epsilon^2 }[/math], this design requires only a factor of 4 increase in the number of sensors to achieve the same variance as the MLE in the unconstrained-bandwidth setting.
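A minimal sketch of this bit-allocation idea follows. It is illustrative only: the dyadic decision intervals match the description above, while the reconstruction coefficients are a natural choice that rebuilds the binary expansion from the averaged bits; the exact coefficients and the [math]\displaystyle{ \delta }[/math]-unbiasedness analysis are given in [3].

```python
import numpy as np

def message(x, k, U):
    """One-bit message of a level-k sensor: the k-th bit of the binary
    expansion of (x + 2U) / (4U).  For k = 1 this is the indicator of
    [0, 2U]; for k = 2 the indicator of [-U, 0) U [U, 2U]; and so on."""
    y = min((x + 2 * U) / (4 * U), 1 - 1e-12)   # map [-2U, 2U] onto [0, 1)
    return int(y * 2 ** k) % 2

def fuse(bits, levels, U):
    """Affine fusion: average the bits of each level and rebuild the
    expansion, i.e. theta_hat = sum_k 4U * 2^{-k} * mean(level-k bits) - 2U."""
    est = -2 * U
    for k in sorted(set(levels)):
        level_bits = [b for b, lv in zip(bits, levels) if lv == k]
        est += 4 * U * 2.0 ** (-k) * np.mean(level_bits)
    return est

# Example: half the sensors encode bit 1, a quarter bit 2, and so on.
rng = np.random.default_rng(3)
U, theta, N = 1.0, 0.4, 64
levels = [k for k in range(1, 7) for _ in range(N // 2 ** k)]
x = theta + rng.uniform(-U, U, size=len(levels))    # bounded, non-Gaussian noise
bits = [message(xi, k, U) for xi, k in zip(x, levels)]
print(fuse(bits, levels, U))
```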
The design of the sensor array requires optimizing the power allocation as well as minimizing the communication traffic of the entire system. The design suggested in [5] incorporates probabilistic quantization in the sensors and a simple optimization program that is solved in the fusion center only once. The fusion center then broadcasts a set of parameters to the sensors that allows them to finalize the design of their messaging functions [math]\displaystyle{ m_n(\cdot) }[/math] so as to meet the energy constraints. Another work employs a similar approach to address distributed detection in wireless sensor arrays.[6]
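As an illustration of the probabilistic quantization ingredient, the sketch below uses a generic unbiased randomized-rounding construction: each sensor maps its measurement to one of the two nearest quantization levels with probabilities chosen so that the quantized value equals the measurement on average. The specific quantizer and the power-allocation optimization of [5] may differ.

```python
import numpy as np

def probabilistic_quantize(x, levels, rng):
    """Randomized rounding to the two nearest levels so that
    E[quantized value] = x for any x inside the level range."""
    levels = np.asarray(levels)
    x = np.clip(x, levels[0], levels[-1])
    i = np.searchsorted(levels, x) - 1          # index of the level just below x
    i = np.clip(i, 0, len(levels) - 2)
    lo, hi = levels[i], levels[i + 1]
    p_hi = (x - lo) / (hi - lo)                 # probability of rounding up
    return hi if rng.random() < p_hi else lo

rng = np.random.default_rng(4)
levels = np.linspace(-2.0, 2.0, 9)              # 3-bit quantizer, illustrative
samples = [probabilistic_quantize(0.3, levels, rng) for _ in range(100000)]
print(np.mean(samples))                         # close to 0.3: unbiased on average
```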
Original source: https://en.wikipedia.org/wiki/Location estimation in sensor networks.