Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture

The huge amount of data currently produced by modern Earth Observation (EO) missions has allowed for the design of advanced machine learning techniques able to support complex Land Use/Land Cover (LULC) mapping tasks. The Copernicus programme developed by the European Space Agency provides, with missions such as Sentinel-1 (S1) and Sentinel-2 (S2), radar and optical (multi-spectral) imagery, respectively, at 10 m spatial resolution with a revisit time of around 5 days. Such high temporal resolution makes it possible to collect Satellite Image Time Series (SITS) that support a plethora of Earth surface monitoring tasks. How to effectively combine the complementary information provided by such sensors remains an open problem in the remote sensing field. In this work, we propose a deep learning architecture to combine information coming from S1 and S2 time series, namely TWINNS (TWIn Neural Networks for Sentinel data), able to discover spatial and temporal dependencies in both types of SITS. The proposed architecture is devised to boost the land cover classification task by leveraging two levels of complementarity, i.e., the interplay between radar and optical SITS as well as the synergy between spatial and temporal dependencies. Experiments carried out on two study sites characterized by different land cover characteristics (i.e., the Koumbia site in Burkina Faso and Reunion Island, an overseas department of France in the Indian Ocean), demonstrate the significance of our proposal.
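To make the two-branch ("twin") fusion idea concrete, the sketch below shows one plausible reading of the abstract: each sensor's time series is encoded by its own branch, and the per-sensor features are concatenated before classification. The use of GRU encoders, the layer sizes, and the band/date counts are illustrative assumptions, not the authors' exact TWINNS specification (which also models spatial dependencies).

```python
# Hypothetical sketch of a two-branch ("twin") network for multi-source SITS
# classification, loosely following the abstract. Encoder choice (GRU),
# layer sizes and input shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class SITSBranch(nn.Module):
    """Encodes one sensor's image time series (per-date feature vectors)
    into a single fixed-size representation."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):           # x: (batch, n_dates, n_features)
        _, h = self.gru(x)          # h: (1, batch, hidden)
        return h.squeeze(0)         # (batch, hidden)

class TwinSITSClassifier(nn.Module):
    """Two per-sensor branches whose features are fused for classification."""
    def __init__(self, s1_features, s2_features, n_classes, hidden=64):
        super().__init__()
        self.s1_branch = SITSBranch(s1_features, hidden)
        self.s2_branch = SITSBranch(s2_features, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, s1_series, s2_series):
        fused = torch.cat([self.s1_branch(s1_series),
                           self.s2_branch(s2_series)], dim=1)
        return self.classifier(fused)

# Example: 2 radar bands (VV, VH) over 20 dates, 10 optical bands over 15 dates.
model = TwinSITSClassifier(s1_features=2, s2_features=10, n_classes=8)
logits = model(torch.randn(4, 20, 2), torch.randn(4, 15, 10))
print(logits.shape)  # torch.Size([4, 8])
```

The key design point illustrated here is late fusion: each sensor stream keeps its own encoder, so radar and optical temporal patterns are learned separately before being combined, which matches the two levels of complementarity described in the abstract.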

Bibliographic Details
Main Authors: Ienco, Dino, Interdonato, Roberto, Gaetano, Raffaele, Ho Tong Minh, Dinh
Format: article
Language: English
Subjects: U30 - Research methods, U10 - Computer science, mathematics and statistics, P31 - Soil surveys and mapping, land cover mapping, time series analysis, imagery, satellite imagery, satellite observation, http://aims.fao.org/aos/agrovoc/c_9000094, http://aims.fao.org/aos/agrovoc/c_28778, http://aims.fao.org/aos/agrovoc/c_36760, http://aims.fao.org/aos/agrovoc/c_36761, http://aims.fao.org/aos/agrovoc/c_9000182, http://aims.fao.org/aos/agrovoc/c_6543, http://aims.fao.org/aos/agrovoc/c_8081, http://aims.fao.org/aos/agrovoc/c_3081
Online Access:http://agritrop.cirad.fr/597770/
http://agritrop.cirad.fr/597770/7/ID597770.pdf