Mc Cutchan, Marvin; Comber, Alexis J.; Giannopoulos, Ioannis; Canestrini, Manuela
In: Remote Sensing, vol. 13, no. 16, 2021, ISSN: 2072-4292.
The classification of land use and land cover (LULC) is a well-studied task within the domains of remote sensing and geographic information science. It traditionally relies on remotely sensed imagery and therefore models land cover classes with respect to their electromagnetic reflectances, aggregated in pixels. This paper introduces a methodology that enables the inclusion of geographical object semantics (from vector data) in the LULC classification procedure. As such, information on the types of geographic objects (e.g., Shop, Church, Peak, etc.) can improve LULC classification accuracy. In this paper, we demonstrate how semantics can be fused with imagery to classify LULC. Three experiments were performed to explore and highlight the impact and potential of semantics for this task. In each experiment, CORINE LULC data was used as ground truth and predicted using imagery from Sentinel-2 and semantics from LinkedGeoData, with deep learning. Our results reveal that LULC can be classified from semantics alone, and that fusing semantics with imagery (Semantic Boosting) yielded significantly higher LULC classification accuracies. The results show that some LULC classes are better predicted using only semantics, others with just imagery, and, importantly, that much of the improvement stems from the ability to separate similar land use classes. A number of key considerations are discussed.
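The fusion idea the abstract describes can be illustrated with a minimal sketch. All names, feature dimensions, and the early-fusion-plus-linear-softmax design below are assumptions for illustration only, not the paper's actual architecture: per-pixel Sentinel-2 reflectances are concatenated with counts of nearby geographic object types (the "semantics") into one feature vector, which is then scored against a small set of LULC classes.

```python
import numpy as np

# Assumed dimensions for the sketch (not taken from the paper):
N_BANDS = 10          # Sentinel-2 spectral bands per pixel
N_OBJECT_TYPES = 5    # e.g., Shop, Church, Peak, ... (a hypothetical subset)
N_LULC_CLASSES = 4    # a hypothetical handful of CORINE classes

def fuse_features(spectral: np.ndarray, semantic_counts: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate per-pixel reflectances with counts of
    nearby geographic object types into one feature vector per pixel."""
    return np.concatenate([spectral, semantic_counts], axis=-1)

def softmax(z: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Score LULC classes with a (here untrained) linear softmax layer,
    standing in for the deep model used in the paper."""
    return softmax(features @ weights + bias)

rng = np.random.default_rng(0)
pixels = 8
spectral = rng.random((pixels, N_BANDS))                       # reflectances in [0, 1)
semantics = rng.integers(0, 3, (pixels, N_OBJECT_TYPES)).astype(float)

fused = fuse_features(spectral, semantics)                     # shape (8, 15)
W = rng.normal(size=(N_BANDS + N_OBJECT_TYPES, N_LULC_CLASSES))
b = np.zeros(N_LULC_CLASSES)
probs = classify(fused, W, b)                                  # shape (8, 4), rows sum to 1
```

The point of the sketch is only the fusion step: the imagery-only baseline would pass `spectral` alone to the classifier, while the semantics-only variant would pass `semantics` alone; the fused vector lets the model exploit both signal sources at once.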