Abstract
Many surveillance applications gather information about a scene from more than one sensor modality, and the resulting heterogeneous sensor data must be fused by the decision-maker. In this paper, we discuss the issues relevant to developing a model for fusing information from audio and visual sensors, and present a framework to enhance decision-making capabilities. In particular, our methodology focuses on temporal reasoning, uncertainty representation, and the coupling between features inferred from data streams coming from different sensors. We propose a conditional-probability representation of uncertainty, along with fuzzy rules to assist decision-making, and a matrix representation of the coupling between sensor data streams. We also develop a fusion algorithm that utilizes these representations.
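The abstract only names the ingredients (conditional probabilities per stream, fuzzy decision rules, a coupling matrix between streams); the paper's actual algorithm is not reproduced here. As a purely illustrative sketch under assumed semantics — a coupling matrix whose entry (i, j) weights how strongly stream j's evidence supports stream i — one way such a fusion step could look (all function names, thresholds, and the combination rule are hypothetical):

```python
import numpy as np

def fuse(stream_probs, coupling):
    """Fuse per-stream event probabilities with a coupling matrix.

    stream_probs: (n_streams,) conditional probabilities P(event | stream i)
    coupling:     (n_streams, n_streams) nonnegative weights; coupling[i, j]
                  says how strongly stream j's evidence supports stream i.
    Returns a single fused probability in [0, 1].
    """
    p = np.asarray(stream_probs, dtype=float)
    C = np.asarray(coupling, dtype=float)
    support = C @ p                      # coupled evidence received per stream
    weights = support / support.sum()    # normalize to a convex combination
    return float(weights @ p)            # fused probability stays in [0, 1]

def fuzzy_alert(p, low=0.3, high=0.7):
    """Toy fuzzy-style decision rule: map fused probability to an action."""
    if p >= high:
        return "alarm"
    if p >= low:
        return "review"
    return "ignore"

# Audio stream reports 0.9, video reports 0.6; the coupling matrix here
# makes video evidence slightly more influential (hypothetical values).
probs = [0.9, 0.6]
coupling = np.array([[1.0, 0.5],
                     [0.5, 1.5]])
fused = fuse(probs, coupling)
print(fused, fuzzy_alert(fused))
```

Because the weights form a convex combination of the per-stream probabilities, the fused value is guaranteed to stay within the range of the inputs, regardless of the (nonnegative) coupling values chosen.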
Original language | English (US) |
---|---|
Article number | 22 |
Pages (from-to) | 201-210 |
Number of pages | 10 |
Journal | Proceedings of SPIE - The International Society for Optical Engineering |
Volume | 5813 |
State | Published - 2005 |
Event | Multisensor, Multisource Information Fusion: Architecture, Algorithms, and Applications 2005 - Orlando, FL, United States |
Duration | Mar 30 2005 → Mar 31 2005 |
Keywords
- Activity detection
- Audio and visual surveillance
- Multimodal sensor fusion
- Scene recognition
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering