02980naa a2200361 a 4500
001    2134326
005    2021-09-14
008    2021 bl uuuu u00u1 u #d
024 7  $a https://doi.org/10.1016/j.biosystemseng.2021.08.011 $2 DOI
100 1  $a GONÇALVES, J. P.
245    $a Deep learning architectures for semantic segmentation and automatic estimation of severity of foliar symptoms caused by diseases or pests. $h [electronic resource]
260    $c 2021
520    $a Colour-thresholding digital imaging methods are generally accurate for measuring the percentage of foliar area affected by disease or pests (severity), but they perform poorly when scene illumination and background are not uniform. In this study, six convolutional neural network (CNN) architectures were trained for semantic segmentation of images of individual leaves exhibiting necrotic lesions and/or yellowing caused by the insect pest coffee leaf miner (CLM) and two fungal diseases: soybean rust (SBR) and wheat tan spot (WTS). All images were manually annotated for three classes: leaf background (B), healthy leaf (H), and injured leaf (I). Precision, recall, and Intersection over Union (IoU) metrics on the test image set were highest for B, followed by H and I, regardless of architecture. When the pixel-level predictions were used to calculate percent severity, Feature Pyramid Network (FPN), U-Net, and DeepLabv3+ (Xception) performed best among the architectures: concordance coefficients with the annotated severity exceeded 0.95, 0.96, and 0.98 for the CLM, SBR, and WTS datasets, respectively. The other three architectures tended to misclassify healthy pixels as injured, leading to overestimation of severity. The results highlight the value of a CNN-based automatic segmentation method for determining severity in images of foliar diseases obtained under challenging brightness and background conditions. Severity estimates from FPN, U-Net, and DeepLabv3+ (Xception) were as accurate as those from standard commercial software, which requires adjusting segmentation parameters and removing the complex image background, tasks that slow down the process.
650    $a Artificial intelligence
650    $a Neural networks
650    $a Plant diseases and disorders
650    $a Doença de Planta
653    $a Aprendizado de máquina
653    $a Aprendizado profundo
653    $a Convolutional neural network
653    $a Fitopatometria
653    $a Image segmentation
653    $a Inteligência artificial
653    $a Machine learning
653    $a Phytopathometry
653    $a Rede neural convolucional
653    $a Segmentação de imagem
700 1  $a PINTO, F. A. C.
700 1  $a QUEIROZ, D. M.
700 1  $a VILLAR, F. M. M.
700 1  $a BARBEDO, J. G. A.
700 1  $a DEL PONTE, E. M.
773    $t Biosystems Engineering $g v. 210, p. 129-142, Oct. 2021.
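The abstract describes two computations: turning pixel-level class predictions into percent severity, and scoring each class (B, H, I) with Intersection over Union. A minimal sketch of both, assuming a hypothetical integer label encoding (0 = background, 1 = healthy, 2 = injured) that the record itself does not specify:

```python
import numpy as np

# Assumed label encoding for a predicted segmentation mask;
# the paper's actual mapping is not given in this record.
BACKGROUND, HEALTHY, INJURED = 0, 1, 2

def percent_severity(mask: np.ndarray) -> float:
    """Severity = injured leaf pixels / total leaf pixels * 100.

    Background pixels are excluded, so severity is a fraction
    of the leaf area only.
    """
    healthy = np.count_nonzero(mask == HEALTHY)
    injured = np.count_nonzero(mask == INJURED)
    leaf = healthy + injured
    return 100.0 * injured / leaf if leaf else 0.0

def class_iou(pred: np.ndarray, truth: np.ndarray, cls: int) -> float:
    """Intersection over Union for one class against the annotation."""
    p, t = pred == cls, truth == cls
    union = np.count_nonzero(p | t)
    return np.count_nonzero(p & t) / union if union else 1.0
```

For example, a mask with two healthy and two injured leaf pixels yields a severity of 50%, regardless of how many background pixels surround the leaf; systematic healthy-to-injured misclassification (as noted for the three weaker architectures) inflates the injured count and therefore overestimates severity.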