Dilated residual U-Net for vegetation detection from high resolution drone aerial imagery
Abstract
Vegetation plays a vital role in regulating air quality and mitigating climate change by converting carbon dioxide into oxygen. However, ongoing human activity continues to degrade vegetation ecosystems, necessitating scalable and accurate monitoring methods. Traditional field-based statistical approaches are often costly and inefficient. This study proposes a deep learning model, dilated residual U-Net, for semantic segmentation of vegetation from drone-acquired aerial imagery. The model incorporates residual connections to reduce information loss and dilated convolutions to enhance receptive field coverage without increasing computational cost. Experiments conducted on the DroneDeploy Segmentation dataset demonstrate that the proposed model achieves a Dice coefficient of 0.4451 with an inference speed of 0.0675 seconds per image, outperforming baseline U-Net and Residual U-Net models. These results highlight the potential of lightweight, CNN-based architectures for environmental monitoring in resource-constrained settings.
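The abstract's claim that dilated convolutions enlarge receptive field coverage without extra computational cost can be illustrated with the standard receptive-field formula for stacked stride-1 convolutions. The sketch below is not from the paper; the kernel sizes and dilation rates (1, 2, 4) are illustrative assumptions, not the architecture's actual hyperparameters.

```python
def receptive_field(kernel_sizes, dilations):
    """Effective receptive field of a stack of stride-1 convolutions.

    Each layer with kernel size k and dilation d adds (k - 1) * d
    pixels of context on top of the previous layers' receptive field.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers: identical parameter count and FLOPs per pixel,
# but dilation rates 1, 2, 4 more than double the receptive field.
plain = receptive_field([3, 3, 3], [1, 1, 1])    # -> 7
dilated = receptive_field([3, 3, 3], [1, 2, 4])  # -> 15
```

Because dilation only spaces out the kernel taps rather than adding weights, the dilated stack sees a 15-pixel-wide context for the same cost at which the plain stack sees 7, which is the trade-off the model exploits.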
Keywords
Computer vision; Deep learning; Machine learning; Remote sensing; Semantic segmentation
DOI: http://doi.org/10.11591/ijeecs.v42.i1.pp115-122
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES).