Mapping floods from remote sensing data and quantifying the effects of surface obstruction by clouds and vegetation

Apoorva Shastry, Elizabeth Carter, Brian Coltin, Rachel Sleeter, Scott McMichael, Jack Eggleston

Research output: Contribution to journal › Article › peer-review


Abstract

Floods are among the most devastating natural calamities, affecting millions of people and causing damage all around the globe. Flood models and remote sensing imagery are often used to predict and understand flooding. An increasing number of earth observation satellites produce data at a rate that far outpaces our ability to manually extract meaningful information from it. This has motivated a surge in research on automatic feature detection in satellite imagery using machine learning and deep learning algorithms to automate flood mapping, so that information from large streams of data can be extracted in near-real time and used for disaster response at landscape scale. The development of such an algorithm is predicated on exposure to training datasets that are representative of the full range of diversity in the spatial and spectral signature of surface water as it is sampled by space-based instruments. To address these needs, we developed a semantically labeled dataset of high-resolution multispectral imagery (Maxar WorldView-2/3), strategically sampled to be representative of North American surface water variability along five spatiotemporal strata, including latitude, topographic complexity, land use, and day of year. This dataset was used to train a convolutional neural network (CNN) to automatically detect inundation extents using the Deep Earth Learning, Tools, and Analysis (DELTA) framework, an open-source TensorFlow/Keras interpreter for satellite imagery. Our research objective was to demonstrate the out-of-sample accuracy of our trained CNN at landscape scale. The model performed well, with 98% precision and 94% recall for the water class during validation. We then evaluated the accuracy of the satellite-derived flood maps from the trained machine learning model against a hydraulic model.
To do so, we compared predicted inundation extents against the USGS Flood Inundation Mapping (FIM) Program's flood map library at 17 locations; the FIM library provides flood inundation extents based on hydraulic models built for river reaches, keyed to stage measurements at a nearby USGS gaging site. Compared to the hydraulic model, we estimated that optical remote sensing data underpredicted flood inundation in our areas of interest by 62%. Using land cover data from the National Land Cover Database (NLCD) together with cloud masks, we estimated that 79% of this underprediction was due to surface obstruction, with 74% attributed to vegetation, 9% to clouds, and 4% to both. A significant amount of inundation is therefore missed when only optical remote sensing data are considered, and we suggest using flood models alongside remote sensing data to obtain the most realistic flood inundation extents.
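The obstruction attribution described above reduces to boolean bookkeeping on co-registered raster masks. Below is a minimal sketch under stated assumptions: the function and mask names are hypothetical, and the toy arrays stand in for the paper's actual inputs (CNN flood maps, FIM hydraulic extents, NLCD-derived vegetation masks, and per-scene cloud masks).

```python
import numpy as np

def attribute_underprediction(hydraulic, predicted, vegetation, clouds):
    """Partition hydraulic-model inundation missed by the optical flood map
    into vegetation-obstructed, cloud-obstructed, and doubly obstructed pixels.

    All inputs are boolean arrays on a common grid. This is an illustrative
    sketch of the bookkeeping, not the authors' actual code.
    """
    missed = hydraulic & ~predicted          # inundation the optical map missed
    n_missed = missed.sum()
    return {
        # fraction of modeled inundation the optical map underpredicted
        "underprediction": missed.sum() / hydraulic.sum(),
        # fractions of the missed pixels explained by each obstruction type
        "vegetation": (missed & vegetation & ~clouds).sum() / n_missed,
        "clouds": (missed & clouds & ~vegetation).sum() / n_missed,
        "both": (missed & vegetation & clouds).sum() / n_missed,
    }

# Toy 5-pixel example: the hydraulic model floods all 5 pixels, the optical
# map recovers 2, one missed pixel is under vegetation and one under cloud.
hydraulic  = np.array([1, 1, 1, 1, 1], dtype=bool)
predicted  = np.array([1, 1, 0, 0, 0], dtype=bool)
vegetation = np.array([0, 0, 1, 0, 0], dtype=bool)
clouds     = np.array([0, 0, 0, 1, 0], dtype=bool)

stats = attribute_underprediction(hydraulic, predicted, vegetation, clouds)
```

In practice the masks would be reprojected to a shared grid before comparison; the arithmetic itself is only elementwise boolean logic and pixel counting.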

Original language: English (US)
Article number: 113556
Journal: Remote Sensing of Environment
Volume: 291
DOIs
State: Published - Jun 1 2023

Keywords

  • Deep learning
  • Flood mapping
  • Hydraulic models
  • Machine learning
  • Remote sensing

ASJC Scopus subject areas

  • Soil Science
  • Geology
  • Computers in Earth Sciences
