Fusion of UAV-Acquired Visible Images and Multispectral Data by Applying Machine-Learning Methods in Crop Classification

The sustainable development of agriculture is closely linked to the adoption of precision agriculture techniques, and accurate crop classification is a fundamental aspect of this approach. This study explores the application of machine learning techniques to crop classification by integrating RGB images and multispectral data acquired by UAVs. The study focused on five crops: rice, soybean, red bean, wheat, and corn. To improve classification accuracy, we extracted three key feature sets: band values and vegetation indices, texture features derived from the grey-level co-occurrence matrix (GLCM), and shape features. These features were combined with five machine learning models: random forest (RF), support vector machine (SVM), k-nearest neighbour (KNN), classification and regression tree (CART), and artificial neural network (ANN). The results show that the RF model consistently outperformed the other models, achieving an overall accuracy (OA) above 97% and a significantly higher Kappa coefficient. Fusing RGB images and multispectral data improved accuracy by 1–4% compared with using a single data source. Feature importance analysis showed that band values and vegetation indices had the greatest impact on classification results. This study provides a comprehensive analysis from feature extraction to model evaluation, identifies the optimal feature combination for improving crop classification, and offers valuable insights for advancing precision agriculture through data fusion and machine learning.
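The fuse-features-then-classify workflow the abstract describes can be sketched with scikit-learn. This is a minimal illustration only: the synthetic band values, the NDVI computation as the vegetation index, and the model parameters are assumptions for demonstration, not the study's actual data, feature set, or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(42)

# Hypothetical per-sample features: three RGB band means, two multispectral
# band means (red, near-infrared), and NDVI derived from them.
n = 500
rgb = rng.uniform(0.0, 1.0, (n, 3))
red = rng.uniform(0.05, 0.5, n)
nir = rng.uniform(0.2, 0.9, n)
ndvi = (nir - red) / (nir + red)  # a common vegetation index

# Placeholder labels for the five crops (rice, soybean, red bean, wheat, corn).
labels = rng.integers(0, 5, n)

# Fuse RGB-derived and multispectral-derived features into one matrix.
X = np.column_stack([rgb, red, nir, ndvi])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# OA and Kappa are the two evaluation metrics reported in the study.
oa = accuracy_score(y_test, pred)
kappa = cohen_kappa_score(y_test, pred)

# RF also exposes per-feature importances, the basis for the kind of
# feature importance analysis mentioned in the abstract.
importances = clf.feature_importances_
```

On real labelled imagery the same pipeline would be preceded by extracting GLCM texture and shape features per segmented object; here only spectral features are shown to keep the sketch short.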

Bibliographic Details
Main Authors: Zheng, Zuojun, Yuan, Jianghao, Yao, Wei, Kwan, Paul, Yao, Hongxun, Liu, Qingzhi, Guo, Leifeng
Format: Article/Letter to editor
Language:English
Subjects: RGB imagery, crop classification, data fusion, drone remote sensing, precision agriculture, random forest algorithm
Online Access:https://research.wur.nl/en/publications/fusion-of-uav-acquired-visible-images-and-multispectral-data-by-a