To better understand disturbances (i.e., anthropogenic pressure) in our environment, researchers have developed methods for mapping wilderness areas, given their crucial role in providing native habitat for many species, which are often endangered. In this work, we formulate the wilderness mapping task as a supervised learning problem. We focus on the joint use of potentially complementary features provided by multi-modal input data. Until now, the individual influences of different input modalities on the decision of a deep neural network have largely remained unclear. Therefore, we develop a framework for the modality-level interpretation of multi-modal Earth observation data in an end-to-end fashion. Leveraging an explainable machine learning method, namely Occlusion Sensitivity Maps, the proposed framework investigates the influence of modalities in an early-fusion setting, i.e., the modalities are fused before the learning process. With respect to the application, our results indicate that auxiliary data such as land cover and nighttime light data are important sources for the accurate classification of wilderness areas, and that a modality's influence increases with its number of spectral channels.
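The core mechanism can be illustrated with a minimal sketch. In an early-fusion setting, the modalities are concatenated along the channel axis, so a modality's influence can be estimated by occluding its channels and measuring the drop in the model's score. The function names, the channel layout, and the toy model below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def modality_occlusion_sensitivity(model, x, modality_slices, baseline=0.0):
    """Estimate per-modality influence for an early-fused input.

    model: callable mapping an input array of shape (C, H, W) to a scalar score.
    x: early-fused input of shape (C, H, W).
    modality_slices: dict mapping modality name -> channel slice.
    baseline: value used to occlude (replace) a modality's channels.
    Returns a dict: modality name -> score drop when that modality is occluded.
    """
    base_score = model(x)
    influence = {}
    for name, channels in modality_slices.items():
        occluded = x.copy()
        occluded[channels] = baseline  # mask out this modality's channels
        influence[name] = base_score - model(occluded)
    return influence

# Toy stand-in model (hypothetical): score = mean over all fused channels.
toy_model = lambda x: float(x.mean())

# Hypothetical channel layout: 3 optical bands + 1 nighttime-light
# channel + 1 land-cover channel, fused into a 5-channel input.
x = np.ones((5, 4, 4))
slices = {"optical": slice(0, 3),
          "nightlight": slice(3, 4),
          "landcover": slice(4, 5)}
scores = modality_occlusion_sensitivity(toy_model, x, slices)
```

With this toy model the influence of each modality is simply proportional to how many channels it contributes, which mirrors the paper's observation that influence grows with the number of spectral channels.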