Abstract
Our lives today are constantly aided and enriched by various types of sensors, which are deployed ubiquitously. Multimodal or heterogeneous signal processing refers to the joint analysis and fusion of data from a variety of sensors (e.g., acoustic, seismic, magnetic, video, and infrared) to solve a common inference problem. Such a system offers several advantages and new possibilities for system improvement in many practical applications. For example, speech perception is known to be a bimodal process that involves both auditory and visual inputs [1]. Visual cues such as the speaker's lip movements have been shown to improve speech intelligibility significantly, especially in environments where the auditory signal is compromised. In addition, much useful information can be extracted from the joint analysis of the different modalities. The use of multiple modalities may provide complementary information and thus increase the accuracy of the overall decision-making process, for example, the fusion of "functional" images from positron emission tomography (PET) with "structural" data from magnetic resonance imaging (MRI).
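The idea that complementary modalities can improve the overall decision can be illustrated with a minimal decision-level fusion sketch. The function, score values, and weights below are hypothetical and not taken from the chapter; they simply show a weighted combination of per-modality class scores, one common fusion strategy among many.

```python
def fuse_decisions(modality_scores, weights):
    """Weighted average of per-modality class-score vectors.

    modality_scores: one score vector per modality (same length each).
    weights: one non-negative weight per modality; normalized here so
             the fused scores remain a convex combination.
    """
    total = sum(weights)
    n_classes = len(modality_scores[0])
    fused = [0.0] * n_classes
    for scores, w in zip(modality_scores, weights):
        for i, s in enumerate(scores):
            fused[i] += (w / total) * s
    return fused

# Hypothetical example: a noisy audio channel gives nearly flat scores,
# while the video channel (e.g., lip movements) is confident in class 0.
audio = [0.4, 0.3, 0.3]
video = [0.8, 0.1, 0.1]
fused = fuse_decisions([audio, video], weights=[0.3, 0.7])
decision = max(range(len(fused)), key=fused.__getitem__)  # index of best class
```

Here the fused scores favor class 0 more strongly than the degraded audio alone would, mirroring the bimodal speech-perception example in the abstract.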
| Original language | English (US) |
| --- | --- |
| Title of host publication | Multisensor Data Fusion |
| Subtitle of host publication | From Algorithms and Architectural Design to Applications |
| Publisher | CRC Press |
| Pages | 127-146 |
| Number of pages | 20 |
| ISBN (Electronic) | 9781482263756 |
| ISBN (Print) | 9781482263749 |
| DOIs | |
| State | Published - Jan 1 2017 |
| Externally published | Yes |
ASJC Scopus subject areas
- General Engineering
- General Physics and Astronomy