PEANUT BUD ORIENTATION DETECTION METHOD BASED ON FSL-YOLO
DOI: https://doi.org/10.35633/inmateh-78-20
Abstract
Peanut sprout orientation detection is a critical step toward automated production. However, the small size and dense distribution of peanut sprouts in plug trays place greater demands on a model's ability to extract features from small objects and to discriminate targets in densely packed scenes. In addition, the limited computational resources of embedded devices restrict the deployment of complex models. To address these challenges, this study proposes FSL-YOLO, a lightweight peanut sprout orientation detection model based on YOLOv8. The proposed model introduces improvements in four main aspects. First, a Fast-CA module, which integrates Coordinate Attention with FasterNet, is incorporated into the backbone network to enhance the perception of dense small targets while reducing the number of parameters and the computational cost. Second, a lightweight downsampling module (LWDS) is designed to replace conventional convolution operations, further improving detection performance. Third, spatial and channel reconstruction convolution (SCConv) is introduced into the neck network to optimize the C2f module, thereby enhancing feature representation capability and model robustness. Fourth, an efficient lightweight detection head, Detect-L, is constructed to further reduce the model size. Experimental results demonstrate that FSL-YOLO achieves both high accuracy and a lightweight design: the model attains an mAP50 of 96.1%, a 2.4% improvement over the original YOLOv8, while reducing floating-point operations (FLOPs) by 51.9% and the number of parameters (Params) by 50%. These results indicate that the proposed model effectively balances detection accuracy and computational efficiency, providing a solid technical foundation for automated peanut sprout production systems.
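The Coordinate Attention mechanism that the Fast-CA module builds on factorizes spatial pooling into two directional pools, so the resulting attention weights retain positional information along each axis, which is what helps localize small, densely packed targets. Below is a minimal NumPy sketch of that pooling idea, not the paper's implementation: the module's 1x1 convolutions and nonlinear bottleneck are replaced here by illustrative per-channel weights `w_h` and `w_w`.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate-attention reweighting of a (C, H, W) feature map.

    Pools over width and over height separately, so each attention vector
    keeps positional information along the remaining axis. w_h and w_w are
    illustrative per-channel weights standing in for the learned 1x1 convs.
    """
    C, H, W = x.shape
    pooled_h = x.mean(axis=2)                    # (C, H): average over width
    pooled_w = x.mean(axis=1)                    # (C, W): average over height
    att_h = sigmoid(w_h[:, None] * pooled_h)     # (C, H) attention along height
    att_w = sigmoid(w_w[:, None] * pooled_w)     # (C, W) attention along width
    # broadcast both directional attention maps back over the feature map
    return x * att_h[:, :, None] * att_w[:, None, :]

# toy usage on a random 4-channel 8x8 feature map
x = np.random.rand(4, 8, 8).astype(np.float32)
y = coordinate_attention(x, np.ones(4), np.ones(4))
```

Because both attention factors lie in (0, 1), the output is an elementwise down-weighting of the input that is position-dependent along each axis; the full module additionally mixes channels before splitting the two directional branches.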