Topic

Technologies and technical equipment for agriculture and food industry

Volume

Volume 78 / No. 1 / 2026

Pages: 257


PEANUT BUD ORIENTATION DETECTION METHOD BASED ON FSL-YOLO

基于FSL-YOLO的花生芽向检测方法

DOI: https://doi.org/10.35633/inmateh-78-20

Authors

Zhenghao LI

School of Agricultural Engineering and Food Science, Shandong University of Technology

Mei WANG

Shandong Academy of Agricultural Machinery Sciences

Chunwang DONG

Tea Research Institute of Shandong Academy of Agricultural Sciences

Huawei YANG

Shandong Academy of Agricultural Machinery Sciences

Zhiwei CHEN

Tea Research Institute of Shandong Academy of Agricultural Sciences

(*) Yulong CHEN

School of Agricultural Engineering and Food Science, Shandong University of Technology

(*) Corresponding author:

Yulong CHEN | cyl06471@sdut.edu.cn

Abstract

Peanut sprout orientation detection is a critical step in achieving automated production. However, the small size and dense distribution of peanut sprouts in plug trays place high demands on a model's ability to extract features from small objects and to discriminate among targets in densely populated scenes. In addition, the limited computational resources of embedded devices restrict the deployment of complex models. To address these challenges, this study proposes FSL-YOLO, a lightweight peanut sprout orientation detection model based on YOLOv8. The proposed model introduces improvements in four main aspects. First, a Fast-CA module, which integrates Coordinate Attention with FasterNet, is incorporated into the backbone network to enhance the perception of dense small targets while reducing the number of parameters and the computational cost. Second, a lightweight downsampling module (LWDS) is designed to replace conventional convolution operations, further improving detection performance. Third, spatial and channel reconstruction convolution (SCConv) is introduced into the neck network to optimize the C2f module, thereby enhancing feature representation capability and model robustness. Fourth, an efficient lightweight detection head, Detect-L, is constructed to further reduce the model size. Experimental results demonstrate that FSL-YOLO achieves both high accuracy and a lightweight design: it attains an mAP50 of 96.1%, a 2.4-percentage-point improvement over the original YOLOv8, while reducing floating-point operations (FLOPs) by 51.9% and the number of parameters (Params) by 50%. These results indicate that the proposed model effectively balances detection accuracy and computational efficiency, providing a solid technical foundation for automated peanut sprout production systems.
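The Fast-CA module described above builds on Coordinate Attention, which factorizes spatial attention into two direction-aware pooling operations so that small, densely packed targets can be localized along each axis. The abstract does not give implementation details, so the following is only an illustrative NumPy sketch of the core Coordinate Attention idea on a single (C, H, W) feature map; the `w_h` and `w_w` weight matrices stand in for the module's 1x1 convolutions, and the shared transform, batch normalization, and intermediate activation of the full design are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_h, w_w):
    """Simplified Coordinate Attention on a (C, H, W) feature map.

    x    : (C, H, W) input features
    w_h  : (C, C) weights for the height-direction branch (stand-in for a 1x1 conv)
    w_w  : (C, C) weights for the width-direction branch
    """
    C, H, W = x.shape
    pool_h = x.mean(axis=2)          # (C, H): average-pool along the width axis
    pool_w = x.mean(axis=1)          # (C, W): average-pool along the height axis
    attn_h = sigmoid(w_h @ pool_h)   # (C, H): per-row attention weights in (0, 1)
    attn_w = sigmoid(w_w @ pool_w)   # (C, W): per-column attention weights in (0, 1)
    # Each position (c, i, j) is rescaled by attn_h[c, i] * attn_w[c, j],
    # encoding positional information along both spatial directions.
    return x * attn_h[:, :, None] * attn_w[:, None, :]
```

Because both attention maps lie in (0, 1), the module can only attenuate features, re-weighting them toward the rows and columns where sprouts concentrate; in the paper's Fast-CA variant this mechanism is paired with FasterNet blocks to keep the parameter count low.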

Abstract in Chinese

花生芽朝向检测是实现其自动化生产的关键环节。然而,穴盘环境中的花生芽具有目标尺寸小、分布密集的特点,对模型的小目标特征提取与密集场景判别能力提出了更高要求,同时嵌入式设备受限的算力也制约了复杂模型的部署。为此,本研究提出了一种基于YOLOv8的轻量级花生芽朝向检测模型FSL-YOLO。该模型主要从四个方面进行改进:一是在主干网络中引入融合Coordinate Attention与FasterNet的Fast-CA模块,以增强对密集小目标的感知并降低参数量与计算量;二是设计轻量级下采样模块(LWDS)替代传统卷积,进一步提升检测性能;三是在颈部网络中引入空间与通道重构卷积(SCConv)优化C2f模块,增强特征表达能力与模型鲁棒性;四是构建高效轻量检测头Detect-L,进一步压缩模型规模。实验结果表明,FSL-YOLO在精度与轻量化方面均表现优异:mAP50达到96.1%,较原YOLOv8提升2.4个百分点;浮点运算量(FLOPs)降低51.9%,参数量(Params)减少50%。该模型兼具高精度与低复杂度的优势,为花生芽自动化生产系统的实现提供了技术基础。


Indexed in

Clarivate Analytics Emerging Sources Citation Index (ESCI)
Scopus/Elsevier
Google Scholar
Crossref
ROAD