Distributed detection and data fusion with heterogeneous sensors

Satish G. Iyengar, Hao He, Arun Subramanian, Ruixin Niu, Pramod K. Varshney, Thyagaraju Damarla

Research output: Chapter in Book/Entry/Poem › Chapter

1 Scopus citation


Our lives today are constantly aided and enriched by various types of sensors, which are deployed ubiquitously. Multimodal or heterogeneous signal processing refers to the joint analysis and fusion of data from a variety of sensors (e.g., acoustic, seismic, magnetic, video, and infrared) to solve a common inference problem. Such a system offers several advantages and new possibilities for system improvement in many practical applications. For example, speech perception is known to be a bimodal process that involves both auditory and visual inputs [1]. Visual cues such as the speaker's lip movements have been shown to improve speech intelligibility significantly, especially in environments where the auditory signal is compromised. In addition, much useful information can be extracted from the joint analysis of the different modalities. The use of multiple modalities may provide complementary information and thus increase the accuracy of the overall decision-making process; an example is the fusion of "functional" images from positron emission tomography (PET) with "structural" data from magnetic resonance imaging (MRI).
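The abstract describes fusing decisions from heterogeneous sensors to improve overall decision accuracy. The chapter's own algorithms are not reproduced here, but a classic result from the distributed detection literature, the Chair–Varshney fusion rule, illustrates the idea: each sensor reports a binary local decision, and the fusion center weights each decision by that sensor's (possibly very different) detection and false-alarm probabilities via a log-likelihood ratio test. The sketch below is a minimal, self-contained illustration; the function name and example sensor statistics are invented for demonstration.

```python
import math

def chair_varshney_fuse(decisions, pd, pf, prior_ratio=1.0):
    """Fuse binary local decisions from heterogeneous sensors.

    decisions   : list of 0/1 local decisions u_i
    pd[i]       : sensor i's detection probability  P(u_i = 1 | H1)
    pf[i]       : sensor i's false-alarm probability P(u_i = 1 | H0)
    prior_ratio : P(H0)/P(H1), the threshold of the likelihood test

    Returns 1 (declare H1) if the fused log-likelihood ratio exceeds
    log(prior_ratio), else 0 (declare H0).
    """
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        if u == 1:
            # A "1" from a reliable sensor (high pd, low pf) weighs heavily.
            llr += math.log(d / f)
        else:
            # A "0" contributes evidence toward H0, again reliability-weighted.
            llr += math.log((1.0 - d) / (1.0 - f))
    return 1 if llr > math.log(prior_ratio) else 0

# Three heterogeneous sensors with different operating points:
pd = [0.9, 0.8, 0.7]
pf = [0.1, 0.2, 0.3]
print(chair_varshney_fuse([1, 1, 0], pd, pf))  # two reliable "1"s outweigh one "0"
print(chair_varshney_fuse([0, 0, 1], pd, pf))  # two reliable "0"s outweigh one "1"
```

Because the weights depend on each sensor's individual error statistics, the rule naturally handles heterogeneity: an unreliable modality is automatically discounted rather than vetoing or dominating the fused decision.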

Original language: English (US)
Title of host publication: Multisensor Data Fusion
Subtitle of host publication: From Algorithms and Architectural Design to Applications
Publisher: CRC Press
Number of pages: 20
ISBN (Electronic): 9781482263756
ISBN (Print): 9781482263749
State: Published - Jan 1 2017
Externally published: Yes

ASJC Scopus subject areas

  • General Engineering
  • General Physics and Astronomy


