IMPROVED YOLOV8N-BASED DETECTION OF GRAPES IN ORCHARDS
DOI: https://doi.org/10.35633/inmateh-74-42
Abstract
To address the issues of low detection accuracy, slow speed, and large parameter size in detecting fresh table grapes in natural orchard environments, this study proposes an improved grape detection model based on YOLOv8n, termed YOLOGPnet. The model replaces the C2f module with Squeeze-and-Excitation Network V2 (SENetV2), whose additional branched cross-layer connections enhance gradient flow and thereby improve detection accuracy. In addition, the Spatial Pyramid Pooling with Enhanced Local Attention Network (SPPELAN) replaces the SPPF module, strengthening the model's ability to capture multi-scale information about the target fruits. The introduction of the Focaler-IoU loss function, with its weight adjustment mechanism, further improves the precision of bounding-box regression. Experimental comparisons with multiple algorithms show that YOLOGPnet achieves an accuracy of 93.6% and mAP@0.5 of 96.8%, improvements of 3.5 and 1.6 percentage points, respectively, over the baseline model YOLOv8n. The model's computational load, parameter count, and weight file size are 6.8 GFLOPs, 2.1 M, and 4.36 MB, respectively, and the detection time per image is 12.5 ms, representing reductions of 21.84%, 33.13%, 30.79%, and 25.60% compared with YOLOv8n. Under the same experimental conditions, YOLOGPnet improves accuracy by 0.7% and 1.9% over YOLOv5n and YOLOv7-tiny, respectively, with the other metrics also showing varying degrees of improvement. This study offers a solution for the accurate and rapid detection of table grapes in natural orchard environments for intelligent grape-harvesting equipment.
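The Focaler-IoU idea mentioned in the abstract is to re-weight bounding-box regression by linearly re-mapping the IoU onto [0, 1] over an interval [d, u], so that easy samples (high IoU) and hard samples (low IoU) contribute differently to the loss. A minimal sketch of this re-mapping is shown below; the threshold values d and u are illustrative hyperparameters, not the settings used in this paper:

```python
def focaler_iou(iou, d=0.0, u=0.95):
    # Linearly re-map the raw IoU onto [0, 1] over the interval [d, u]:
    # IoU <= d maps to 0, IoU >= u maps to 1, values in between scale linearly.
    return min(max((iou - d) / (u - d), 0.0), 1.0)

def focaler_iou_loss(iou, d=0.0, u=0.95):
    # Loss is 1 minus the re-mapped IoU; choosing d and u shifts how much
    # the regression focuses on hard (low-IoU) versus easy (high-IoU) boxes.
    return 1.0 - focaler_iou(iou, d, u)
```

With d = 0 and u = 0.95, a predicted box whose IoU already exceeds 0.95 incurs zero loss, while lower-quality boxes are penalized proportionally more than under a plain 1 - IoU loss.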