
Effective Data Augmentation and Training Techniques for Improving Deep Learning in Plant Leaf Disease Recognition

Prem Enkvetchakul, Olarik Surinta

Abstract


Plant diseases are among the most common problems in agriculture. Symptoms usually appear on the leaves, allowing farmers to diagnose a disease and prevent it from spreading to other areas. An accurate and consistent plant leaf disease recognition system can help contain outbreaks and reduce maintenance costs. In this research, we present a plant leaf disease recognition system based on two deep convolutional neural network (CNN) architectures, MobileNetV2 and NASNetMobile, both designed for smartphones because of their small model size. We experimented with online, offline, and mixed training techniques on two plant leaf disease datasets. For data augmentation, we found that combining the rotation, shift, and zoom techniques significantly improves the performance of the CNN architectures. The experimental results show that the most accurate model for plant leaf disease recognition is the NASNetMobile architecture trained with transfer learning, and that the best accuracy is obtained when the offline training technique is combined with data augmentation.
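The offline augmentation described above, in which rotated, shifted, and zoomed variants of each leaf image are generated before training to enlarge the dataset, can be sketched as follows. This is an illustrative SciPy-based implementation, not the authors' code; the parameter ranges (rotation within ±30°, shifts up to 10% of the image size, zoom factors of 0.9 to 1.1) and the single-channel input are assumptions for the sketch.

```python
import numpy as np
from scipy import ndimage


def _fit_to(img, shape):
    """Center-crop or zero-pad a 2-D array back to a target shape."""
    out = np.zeros(shape, dtype=img.dtype)
    h, w = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    sy, sx = (img.shape[0] - h) // 2, (img.shape[1] - w) // 2
    oy, ox = (shape[0] - h) // 2, (shape[1] - w) // 2
    out[oy:oy + h, ox:ox + w] = img[sy:sy + h, sx:sx + w]
    return out


def augment_offline(image, rng=None):
    """Return rotated, shifted, and zoomed variants of one grayscale image.

    Offline augmentation: the variants are produced (and would be saved to
    disk) ahead of training, so the training set itself grows.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    variants = []

    # Rotation: random angle within +/-30 degrees, output size unchanged.
    angle = rng.uniform(-30.0, 30.0)
    variants.append(ndimage.rotate(image, angle, reshape=False, mode="nearest"))

    # Shift: translate up to 10% of the height/width in each direction.
    dy = rng.uniform(-0.1, 0.1) * image.shape[0]
    dx = rng.uniform(-0.1, 0.1) * image.shape[1]
    variants.append(ndimage.shift(image, (dy, dx), mode="nearest"))

    # Zoom: rescale by 0.9-1.1, then crop/pad back to the original size.
    factor = rng.uniform(0.9, 1.1)
    variants.append(_fit_to(ndimage.zoom(image, factor, mode="nearest"),
                            image.shape))
    return variants
```

In the online setting the same transforms would instead be applied on the fly to each mini-batch during training, while the mixed setting combines both; only the point at which the transforms run changes, not the transforms themselves.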



DOI: 10.14416/j.asep.2021.01.003
