Topic

Technologies and technical equipment for agriculture and food industry

Volume

Volume 64 / No. 2 / 2021

Pages: 335-344

EXTRACTION METHOD FOR CENTERLINES OF RICE SEEDLINGS BASED ON FAST-SCNN SEMANTIC SEGMENTATION

Extraction method for row centerlines of rice seedlings based on Fast-SCNN semantic segmentation

DOI : https://doi.org/10.35633/inmateh-64-33

Authors

Chen Yusong

Soochow University

(*) Geng Changxing

Soochow University

Wang Yong

Soochow University

Shen Renyuan

Soochow University

Zhu Guofeng

Soochow University

(*) Corresponding author: Geng Changxing | [email protected]

Abstract

For the extraction of paddy rice seedling row centerlines, this study proposed a method based on the Fast-SCNN (Fast Segmentation Convolutional Neural Network) semantic segmentation network. By training the Fast-SCNN network, the optimal model was selected to separate the seedlings from the image. After pre-processing of the original images, feature points were extracted with the FAST (Features from Accelerated Segment Test) corner detection algorithm. All outer contours of the segmentation result were extracted, and the feature points were classified according to the outer contour they belong to. For each class of points, a Hough transform based on known points was used to fit the seedling row centerline. Experiments verified that the algorithm is highly robust in every period within three weeks after transplanting. For 1280×1024-pixel PNG format color images, the accuracy of the algorithm is 95.9% and the average processing time per frame is 158 ms, which meets the real-time requirement of visual navigation in paddy fields.
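The abstract describes a sequential pipeline: segment the seedlings with Fast-SCNN, detect FAST corner points, extract the outer contours of the segmentation result, classify the corner points by the contour they fall in, and fit each class with a Hough transform based on known points. Below is a minimal sketch of the post-segmentation steps, assuming OpenCV and NumPy; the mask file name, the FAST threshold, and the accumulator resolution are illustrative assumptions, and the point-based Hough voting is a simplified reading of the method described in the abstract, not the authors' code.

```python
import cv2
import numpy as np


def hough_from_points(points, rho_res=2.0, theta_bins=180):
    """Vote in a (theta, rho) accumulator using only the given points and
    return the strongest line, parameterized as x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    # Accumulator large enough for any rho reachable from these points
    max_rho = float(np.ceil(np.hypot(points[:, 0].max(), points[:, 1].max()))) + rho_res
    acc = np.zeros((int(2 * max_rho / rho_res) + 1, theta_bins), dtype=np.int32)
    for x, y in points:
        rho = x * cos_t + y * sin_t                      # rho of this point for every theta
        idx = np.round((rho + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(theta_bins)] += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t_idx], r_idx * rho_res - max_rho


# Binary seedling mask produced by the segmentation stage (file name is an assumption)
mask = cv2.imread("seedling_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# FAST corner detection; the paper applies it to the pre-processed original image,
# here it runs on the mask only to keep the sketch self-contained
fast = cv2.FastFeatureDetector_create(threshold=20)
pts = np.array([kp.pt for kp in fast.detect(mask, None)], dtype=np.float32)

# Outer contours of the segmentation result
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Classify feature points by the outer contour that contains them,
# then fit one row centerline per class with the point-based Hough vote
for contour in contours:
    inside = np.array([p for p in pts
                       if cv2.pointPolygonTest(contour, (float(p[0]), float(p[1])), False) >= 0])
    if len(inside) < 2:
        continue
    theta, rho = hough_from_points(inside)
    print(f"row centerline: x*cos({theta:.3f}) + y*sin({theta:.3f}) = {rho:.1f}")
```

Voting only at the detected feature points keeps the accumulator update cost proportional to the number of corners rather than the number of mask pixels, which is considerably cheaper than a full image-space Hough transform.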

Abstract (translated from Chinese)

To extract the row centerlines of paddy rice seedlings, a method based on the Fast-SCNN (Fast Segmentation Convolutional Neural Network) semantic segmentation network is proposed. By training the Fast-SCNN network and selecting the optimal model, the seedlings are segmented from the image. After pre-processing of the original images, feature points are extracted with the FAST (Features from Accelerated Segment Test) corner detection algorithm. All outer contours of the segmentation result are extracted, the feature points are classified according to these contours, and for each class of feature points a Hough transform based on known points is used to fit the seedling row centerline. Experiments verified that the algorithm is highly robust in every period within three weeks after transplanting, even under interference such as weeds, missing plants and reflections. Processing a 1280×1024-pixel PNG format color image takes 158 ms on average, and the accuracy of centerline extraction is 95.9%, which meets the real-time requirement of visual navigation in paddy fields.

Indexed in

Clarivate Analytics Emerging Sources Citation Index
Scopus/Elsevier
Google Scholar
Crossref
ROAD