SegVeg: Segmenting RGB images into green and senescent vegetation by combining deep and shallow methods

Pixel segmentation of high-resolution RGB images into chlorophyll-active and non-active vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from the background; the green and senescent vegetation pixels are then separated with an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision, using RGB images segmented by SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with a slight degradation over the green vegetation: the SVM pixel-based approach provides a more precise delineation of the green and senescent patches than the convolutional U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid-pixels. Results show that the green fraction is estimated very well (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for the green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor expert knowledge, or, at least, by offering a pretrained model for more specific uses.
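
For orientation, the sketch below illustrates the two-step logic described in the abstract: a deep model first masks vegetation against the background, and a shallow SVM fed with components of several colour spaces then splits the vegetation pixels into green and senescent, from which class fractions can be derived. This is a minimal sketch only, not the authors' released script; the `unet`, `svm`, and helper names are hypothetical, and the feature set and label encoding are assumptions.

```python
# Minimal sketch of a SegVeg-style two-stage inference pipeline (hypothetical
# names, not the released SegVeg code). Assumptions: `unet` is any callable
# returning a boolean vegetation mask for an RGB image, and `svm` is a fitted
# scikit-learn classifier trained on per-pixel colour-space features with
# labels 1 = green and 2 = senescent.
import numpy as np
from skimage import color


def pixel_features(rgb: np.ndarray) -> np.ndarray:
    """Stack per-pixel components from several colour spaces (RGB, HSV, Lab)."""
    rgb_f = rgb.astype(np.float64) / 255.0
    feats = np.concatenate(
        [rgb_f, color.rgb2hsv(rgb_f), color.rgb2lab(rgb_f)], axis=-1
    )                                            # shape (H, W, 9)
    return feats.reshape(-1, feats.shape[-1])    # shape (H*W, 9)


def segveg_predict(rgb: np.ndarray, unet, svm) -> np.ndarray:
    """Return a label map: 0 = background, 1 = green, 2 = senescent."""
    veg_mask = unet(rgb).astype(bool)            # step 1: vegetation vs background
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    veg_feats = pixel_features(rgb)[veg_mask.ravel()]
    labels[veg_mask] = svm.predict(veg_feats)    # step 2: green vs senescent
    return labels


def class_fractions(labels: np.ndarray) -> dict:
    """Fractions of background, green and senescent pixels over the image."""
    return {name: float(np.mean(labels == k))
            for k, name in enumerate(("background", "green", "senescent"))}
```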

Bibliographic Details
Main Authors: Serouart, Mario; Madec, Simon; David, Etienne; Velumani, Kaaviya; Lopez-Lozano, Raul; Weiss, Marie; Baret, Frédéric
Format: Article
Language: English
Subjects: U10 - Computer science, mathematics and statistics; F40 - Plant ecology
Online Access:http://agritrop.cirad.fr/603155/
http://agritrop.cirad.fr/603155/1/SegVeg.pdf