Resolution-Performance Trade-off of Plant Disease Detectors using Convolutional Neural Networks

Thanaporn Patikorn, Saowalak Leelawongsarote, Arunee Chaisri, Namooy Panya

Abstract


Artificial intelligence has been applied across many fields to drive modern innovations. In agriculture, for instance, computer vision has been used to identify plant diseases from images of plant leaves. Although state-of-the-art models can achieve high accuracy, their practical application remains limited because they are designed to detect diseases from close-up leaf images. In practice, however, images of the plants of interest, such as those captured by fertilizer-spraying drones and security cameras, are typically wide scenes containing multiple trees, each with numerous leaves. Consequently, the leaf regions extracted from such images have significantly lower resolutions than the images used to train the models. In this study, we investigated the trade-off between input image resolution and the performance of plant disease detection models by measuring classification accuracy across a range of image dimensions. The results showed that halving the image width reduced accuracy by approximately 4.35%.
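The abstract does not describe the evaluation pipeline in detail. The sketch below illustrates one plausible way to run such a resolution sweep, assuming a PyTorch image classifier (a MobileNetV3 backbone is used here purely as a placeholder) and an ImageFolder-style leaf-image dataset at the hypothetical path plantvillage/val; the widths, batch size, and class count are illustrative assumptions rather than the authors' settings.

# Minimal sketch of the resolution sweep described in the abstract.
# The paper does not publish code; the model, dataset path, widths, and
# batch size below are illustrative assumptions, not the authors' setup.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def accuracy_at_width(model, data_dir, width, device="cpu"):
    """Downscale inputs to width x width to simulate low-resolution leaf
    crops, upsample back to the network's native input size, and report
    top-1 classification accuracy."""
    tfm = transforms.Compose([
        transforms.Resize((width, width)),   # simulate a low-resolution crop
        transforms.Resize((224, 224)),       # restore the model's input size
        transforms.ToTensor(),
    ])
    loader = DataLoader(datasets.ImageFolder(data_dir, transform=tfm),
                        batch_size=32, shuffle=False)
    model.to(device).eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds.cpu() == labels).sum().item()
            total += labels.size(0)
    return correct / total


if __name__ == "__main__":
    # Hypothetical backbone; in practice one would load weights fine-tuned
    # on a leaf-disease dataset such as PlantVillage (38 classes assumed).
    model = models.mobilenet_v3_small(num_classes=38)
    for w in (224, 112, 56, 28):             # halve the width at each step
        acc = accuracy_at_width(model, "plantvillage/val", w)
        print(f"width={w:3d}px  accuracy={acc:.4f}")

Downscaling and then upsampling, rather than feeding the small image directly, keeps the network's input size fixed so that only the information lost to the lower resolution changes between runs.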

