VISION-BASED TOMATO RIPENESS DETECTION USING DIGITAL IMAGE PROCESSING
DOI: https://doi.org/10.35633/inmateh-78-18
Abstract
Tomatoes (Solanum lycopersicum) are a staple in cuisines worldwide and a subject of scientific interest owing to their health benefits and distinct ripening process. The need to recognize ripe, flavorful tomatoes has motivated research combining technology and agriculture, and image processing has emerged as a promising tool for assessing tomato quality, particularly through color analysis. This study examines the effectiveness of a region-based image processing system for identifying red, ripe tomatoes. Ripeness assessment is currently performed by hand, which is time-consuming and error-prone; we therefore developed a machine learning-based device that uses computer vision and image processing techniques to detect ripe tomatoes with high accuracy. By applying algorithms that analyze color, texture, and shape, the system identifies the optimal harvest time, making the process faster, more efficient, and more cost-effective. Automating tomato harvesting is crucial to addressing the labor shortage and improving the effectiveness of the present harvesting process, and automated harvesting in turn depends on the ability to recognize fruits precisely. Fruit harvested at peak maturity offers the highest taste, vitamin content, and sale value, maximizing financial returns. At present, detection rates are inadequate and fruits are missed because some are occluded by foliage or by neighboring unwanted fruits, and because lighting conditions alter apparent color. To identify tomato fruits under such difficult conditions, this research proposes a tomato detection system based on an enhanced YOLOv8 framework. In test evaluation, the YOLOv8-Tomato model achieved an mAP0.5 of 86.9%, a recall of 98%, an accuracy of 94%, and a precision of 90%.
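The color-analysis step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual method: it labels a tomato region "ripe" when most of its pixels fall in a red hue band. The hue, saturation, and ratio thresholds here are hypothetical assumptions chosen for the example, not values taken from the study.

```python
import colorsys

# Hypothetical thresholds (assumptions for illustration, not from the paper).
RED_HUE_MAX = 0.04  # hue <= ~15 degrees counts as red
RED_HUE_MIN = 0.92  # hue >= ~330 degrees also counts as red (hue wraps around)
MIN_SAT = 0.4       # ignore washed-out pixels
MIN_VAL = 0.2       # ignore near-black pixels

def red_pixel_ratio(pixels):
    """Fraction of (R, G, B) pixels whose hue falls in the red band."""
    red = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s >= MIN_SAT and v >= MIN_VAL and (h <= RED_HUE_MAX or h >= RED_HUE_MIN):
            red += 1
    return red / len(pixels)

def classify_ripeness(pixels, threshold=0.6):
    """Label a tomato region 'ripe' when most of its pixels are red."""
    return "ripe" if red_pixel_ratio(pixels) >= threshold else "unripe"

# Tiny synthetic region: 8 reddish pixels, 2 greenish pixels.
print(classify_ripeness([(200, 30, 30)] * 8 + [(40, 160, 40)] * 2))  # ripe
```

In practice a detector such as the paper's enhanced YOLOv8 model would first localize each tomato, and a color test of this kind would then run only on the detected region.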