Impact of real-world annotations on the training and evaluation of deep learning algorithms in digital pathology

By Adrien Foucart. This is the web version of my PhD dissertation, defended in October 2022. It can alternatively be downloaded as a PDF (original manuscript). A (re-)recording of the public defense presentation can also be viewed on YouTube (32:09).
Cite as: Foucart, A. (2022). Impact of real-world annotations on the training and evaluation of deep learning algorithms in digital pathology (PhD Thesis). Université libre de Bruxelles, Brussels.

A. Description of the datasets

Several datasets, mostly from digital pathology competitions, are used throughout this thesis in our experiments and analyses. We describe their main characteristics here for reference.

A.1 MITOS12

Figure A.1. Acquisition and annotation process for the MITOS12 dataset, from the challenge’s website.

A.2 GlaS 2015

Figure A.2. Example of images and annotated glands from the GlaS challenge training set.

A.3 Janowczyk’s epithelium dataset

Figure A.3. Example of images and annotated epithelium regions from Janowczyk’s epithelium dataset.

A.4 Gleason 2019

Figure A.4. Image and individual pathologist annotations of a core from the Gleason 2019 challenge. Source of the image: https://gleason2019.grand-challenge.org/

A.5 MoNuSAC 2020

Figure A.5. Image patch (left) and annotations (right) from a kidney tissue slide of the MoNuSAC dataset. In the annotations, epithelial nuclei are in red, lymphocytes in yellow, neutrophils in blue, macrophages in green, and boundaries are highlighted in brown.

A.6 Artefact dataset

Figure A.6. Annotated slide from the artefact training set, with imprecise delineation and many unlabelled artefacts, including blurry regions and smaller tears. Image reproduced from [13].

References

[1] L. Roux et al., “Mitosis detection in breast cancer histological images - An ICPR 2012 contest,” Journal of Pathology Informatics, vol. 4, no. 1, p. 8, 2013, doi: 10.4103/2153-3539.112693.

[2] K. Sirinukunwattana et al., “Gland segmentation in colon histology images: The GlaS challenge contest,” Medical Image Analysis, vol. 35, pp. 489–502, 2017, doi: 10.1016/j.media.2016.08.008.

[3] A. Janowczyk and A. Madabhushi, “Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases,” Journal of Pathology Informatics, vol. 7, no. 1, 2016, doi: 10.4103/2153-3539.186902.

[4] Y. Qiu et al., “Automatic Prostate Gleason Grading Using Pyramid Semantic Parsing Network in Digital Histopathology,” Frontiers in Oncology, vol. 12, Apr. 2022, doi: 10.3389/fonc.2022.772403.

[5] D. Karimi, G. Nir, L. Fazli, P. C. Black, L. Goldenberg, and S. E. Salcudean, “Deep Learning-Based Gleason Grading of Prostate Cancer from Histopathology Images - Role of Multiscale Decision Aggregation and Data Augmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 5, pp. 1413–1426, 2020, doi: 10.1109/JBHI.2019.2944643.

[7] G. Nir et al., “Comparison of Artificial Intelligence Techniques to Evaluate Performance of a Classifier for Automatic Grading of Prostate Cancer From Digitized Histopathologic Images,” JAMA Network Open, vol. 2, no. 3, p. e190442, Mar. 2019, doi: 10.1001/jamanetworkopen.2019.0442.

[8] G. Nir et al., “Automatic grading of prostate cancer in digitized histopathology images: Learning from multiple experts,” Medical Image Analysis, vol. 50, pp. 167–180, Dec. 2018, doi: 10.1016/j.media.2018.09.005.

[9] R. Verma et al., “MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge,” IEEE Transactions on Medical Imaging, vol. 40, no. 12, pp. 3413–3423, Dec. 2021, doi: 10.1109/TMI.2021.3085712.

[10] A. Foucart, O. Debeir, and C. Decaestecker, “Comments on ‘MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge,’” IEEE Transactions on Medical Imaging, vol. 41, no. 4, pp. 997–999, Apr. 2022, doi: 10.1109/TMI.2022.3156023.

[11] R. Verma, N. Kumar, A. Patil, N. C. Kurian, S. Rane, and A. Sethi, “Author’s Reply to ‘MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge,’” IEEE Transactions on Medical Imaging, vol. 41, no. 4, pp. 1000–1003, Apr. 2022, doi: 10.1109/TMI.2022.3157048.

[12] A. Foucart, O. Debeir, and C. Decaestecker, “Artifact Identification in Digital Pathology from Weak and Noisy Supervision with Deep Residual Networks,” in 2018 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech), Nov. 2018, pp. 1–6. doi: 10.1109/CloudTech.2018.8713350.

[13] A. Foucart, O. Debeir, and C. Decaestecker, “Snow Supervision in Digital Pathology: Managing Imperfect Annotations for Segmentation in Deep Learning,” 2020, doi: 10.21203/rs.3.rs-116512.

 

