Data Compression - Quantization for Distributed Estimation - Sensor Fusion

Vasileios Megalooikonomou and Yaacov Yesha

Introduction and Objectives

Networks of embedded sensors are becoming increasingly important, largely because of their potentially enormous impact on environmental monitoring, product quality control, defense systems, and other domains. New technologies such as MEMS (MicroElectroMechanical Systems) (CMU) and Smart-Dust (Berkeley) are expected to expand the capabilities of embedded devices and sensor networks by putting a complete sensing/communication platform inside a cubic millimeter.

In distributed estimation systems, an estimate of a parameter must be formed at a central node (the fusion center) using data collected from peripheral nodes (sensors). Classical estimation theory assumes that the estimator has direct access to the data. In practice, however, capacity constraints on the communication channels force the data to be transmitted at rates insufficient to convey all the observations reliably, so the estimate must be computed from compressed information. A distributed estimation system that has received considerable attention consists of a single fusion center with a number of remote sensors (see Figure 1). This scheme models many applications, from radar (e.g., target tracking using multiple radars) and satellite-based remote sensing (LANDSAT) to sonar and seismology. We consider the problem of designing the compressors (quantizers) for such a system in the practically important and challenging case where the underlying distribution is unknown.

Problem Definition

The problem is defined as follows: for a distributed system with k sensors, find, for each sensor, a mapping from its observation space to codewords (whose length in bits is dictated by the capacity constraints), and find a fusion-center function that maps a vector of k codewords to an estimate of the unobserved quantities, so that the mean of the squared Euclidean norm of the estimation error is minimized. The observations and the unobserved quantities follow a joint probability distribution. We assume that this distribution is unknown, so the system is designed from a training set and the mean squared error is evaluated on a test set.
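As a concrete illustration of this formulation, the following sketch (our own toy construction; the uniform quantizers, the noise model, and the table-based fusion rule are illustrative assumptions, not the design from the papers) builds k scalar quantizers and a fusion table of conditional means from a training set, then evaluates the mean squared error on a held-out test set:

```python
import numpy as np

# Illustrative setup: k sensors each quantize a scalar observation to a
# B-bit codeword; the fusion center maps the codeword vector to an estimate.
# All names and parameter choices here are our own toy assumptions.

def make_uniform_quantizer(lo, hi, bits):
    """Return a mapping from an observation to one of 2**bits codewords."""
    edges = np.linspace(lo, hi, 2 ** bits + 1)[1:-1]  # interior bin edges
    return lambda x: int(np.digitize(x, edges))

def fusion_table_estimate(table, codewords):
    """Look up the estimate stored for a vector of k codewords."""
    return table[tuple(codewords)]

# Toy data: k = 2 sensors observe theta plus independent noise.
rng = np.random.default_rng(0)
k, bits, n_train, n_test = 2, 2, 5000, 1000
theta = rng.uniform(0, 1, n_train + n_test)
obs = theta[None, :] + 0.1 * rng.standard_normal((k, n_train + n_test))

quantizers = [make_uniform_quantizer(-0.2, 1.2, bits) for _ in range(k)]
codes = np.array([[q(x) for x in row] for q, row in zip(quantizers, obs)])

# Fusion table: for each codeword vector, store the mean of theta over the
# training samples mapping to it (the MMSE estimate given the quantized data).
levels = 2 ** bits
table = np.zeros((levels,) * k)
for idx in np.ndindex(table.shape):
    mask = np.all(codes[:, :n_train] == np.array(idx)[:, None], axis=0)
    table[idx] = theta[:n_train][mask].mean() if mask.any() else 0.5

# Evaluate the empirical mean squared error on the test set.
est = np.array([fusion_table_estimate(table, codes[:, n_train + j])
                for j in range(n_test)])
mse = float(np.mean((est - theta[n_train:]) ** 2))
```

With two 2-bit sensors, the test MSE falls well below the prior variance of theta (1/12), which is what the quantizer/fusion design problem seeks to minimize.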

Technical Approach

We have introduced an approach based on a generalization of regression trees. Our scheme grows and prunes regression trees and applies labeling techniques that iteratively decrease the estimation error. Simulations show that, although we make no assumptions about the observation statistics, the performance of our system is comparable to that of the Lam-Reibman quantization system, which assumes the distribution is known.

We have also proposed neural network quantizers. In this approach we first apply a variation of the Cyclic Generalized Lloyd's Algorithm (CGLA) to every point of the training set in order to find the proper codeword for each of these points; the initial codewords are given by the regression tree approach above. We then use a neural network representation for each quantizer to represent the training points along with their associated codewords.

To reduce the storage requirements at the central node we have proposed two approaches: the first gives a direct sum estimation of the parameter; the second is based on neural networks and allows more flexible trade-offs between the storage complexity of the central node and the performance of the quantizers.
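The cyclic relabeling idea behind the CGLA step can be sketched as follows (a hedged toy version: the scalar observation model, the conditional-mean fusion rule, and all function names are our illustrative assumptions, not the authors' implementation). Holding the other sensors' labels fixed, each training point at a sensor is reassigned the codeword that minimizes its squared estimation error under the current fusion estimates, after which the estimates are recomputed:

```python
import numpy as np

def recompute_estimates(labels, theta, levels):
    """Fusion-center estimates: conditional mean of theta per label vector."""
    est = {}
    for idx in np.ndindex(*([levels] * labels.shape[0])):
        mask = np.all(labels == np.array(idx)[:, None], axis=0)
        est[idx] = theta[mask].mean() if mask.any() else theta.mean()
    return est

def empirical_mse(labels, theta, levels):
    """Training MSE of the conditional-mean fusion rule for given labels."""
    est = recompute_estimates(labels, theta, levels)
    pred = np.array([est[tuple(labels[:, t])] for t in range(labels.shape[1])])
    return float(np.mean((pred - theta) ** 2))

def cgla_pass(labels, theta, levels):
    """One cyclic pass: relabel each sensor's points to cut squared error."""
    k, n = labels.shape
    for i in range(k):
        est = recompute_estimates(labels, theta, levels)  # others held fixed
        for t in range(n):
            errs = [(est[tuple(c if j == i else int(labels[j, t])
                               for j in range(k))] - theta[t]) ** 2
                    for c in range(levels)]
            labels[i, t] = int(np.argmin(errs))
    return labels

# Toy data: k = 2 sensors observe theta plus noise; the initial labels come
# from a fixed uniform quantizer (standing in for the regression-tree labels).
rng = np.random.default_rng(1)
k, levels, n = 2, 4, 400
theta = rng.uniform(0, 1, n)
obs = theta[None, :] + 0.1 * rng.standard_normal((k, n))
labels = np.clip((obs * levels).astype(int), 0, levels - 1)

mse_before = empirical_mse(labels, theta, levels)
labels = cgla_pass(labels, theta, levels)
mse_after = empirical_mse(labels, theta, levels)
```

Because each relabeling minimizes error under fixed estimates and each estimate update is a conditional mean, a full pass never increases the training error; in the actual scheme, a neural network per sensor is then trained to reproduce the resulting point-to-codeword assignment.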

References

[1] V. Megalooikonomou and Y. Yesha, "Space Efficient Quantization for Decentralized Estimation by a Multi-sensor Fusion System", Information Fusion, Vol. 5, No. 4, pp. 299-308, 2004.

[2] V. Megalooikonomou and Y. Yesha, "Quantization for Distributed Estimation using Neural Networks", Information Sciences, Vol. 148, No. 1-4, pp. 185-199, 2002.

[3] V. Megalooikonomou and Y. Yesha, "Quantizer Design for Distributed Estimation with Communication Constraints and Unknown Observation Statistics", IEEE Transactions on Communications, Vol. 48, No. 2, pp. 181-184, 2000.

[4] V. Megalooikonomou and Y. Yesha, "Design of Neural Network Quantizers for a Distributed Estimation System with Communication Constraints", in Proceedings of the IEEE International Conference Acoustics, Speech, Signal Processing (ICASSP), Seattle, Washington, pp. 3469-3472, May 1998.

[5] V. Megalooikonomou and Y. Yesha, "Quantization for Distributed Estimation with Communication and Storage Constraints", in Proceedings of 35th Annual Allerton Conference on Communications, Control, and Computing, Urbana, Illinois, pp. 102-112, Sept. 1997.

[6] V. Megalooikonomou and Y. Yesha, "Quantization for Distributed Estimation with Unknown Observation Statistics", in Proceedings of the 31st Annual Conference on Information Sciences and Systems, Baltimore, Maryland, pp. 138-143, Mar. 1997.

[7] V. Megalooikonomou, "Quantization for Distributed Estimation with Unknown Observation Statistics", Doctoral Dissertation, Computer Science, University of Maryland, Baltimore County, 1997.