Deep Learning in Digital Pathology

Research in the course of a PhD thesis, by Adrien Foucart.

The ICPR 2012 mitosis detection competition - usually called MITOS12 - proved that Deep Learning was the way to go for mitosis detection, and was influential in introducing Deep Learning into the world of Digital Pathology. It was also a flawed challenge, and there is a lot to learn from the mistakes that were made in its design. To understand those mistakes, we first need to look at what the goal of a "computer vision challenge" is.

1. How do we know if a computer vision algorithm is better than another?

Let's say you are a researcher who just developed a new algorithm for recognizing the species of a bird from a photograph taken in a natural setting. Now, you want to publish your research. To convince reviewers that your algorithm is interesting, however, you not only have to show that it works: you also have to show that it improves on the current state of the art.

If you are not a big fan of ethics, here's what you do: you find a set of test images where your algorithm works really, really well. Then, you implement the other algorithms, without caring too much about whether you do it exactly right. You test all the algorithms on your test set and, surprise, yours is the best! This may be very helpful in getting published, but of course it doesn't tell us anything useful.

This is of course an exaggeration... but not so far from the truth. Even if you do care about ethics, it's very hard to implement other people's methods - especially if they didn't publish the code - and to find the best parameters for every method on your dataset. A very popular way to solve this problem is to use benchmarks and challenges.

The idea of a benchmark is fairly simple. Let's take our "bird recognition" problem: at some point, someone publishes a large collection of annotated bird images. Everyone working on bird recognition can then test their algorithm on the same data, which means you can directly compare your method to what others have published. Challenges are similar, except that there is usually a "time limit" component: someone publishes a bird dataset and tells everyone interested to submit their method before a given date. Then, everyone is evaluated at the same time (ideally on previously unreleased test images), and the results are published. This ensures a certain fairness in the comparison, as everyone plays by the same rules.

While many challenges only attract a few participants and are quickly forgotten, some have become true references for the computer vision community. The PASCAL Visual Object Classes challenge, for instance, ran between 2005 and 2012, with researchers tasked with recognizing objects from up to 20 different classes (see figure below). Starting in 2010, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC, often simply referred to as "ImageNet") became the reference for generic computer vision tasks.

Classes from the PASCAL VOC challenge (source)

Many Deep Learning algorithms have become famous through their ImageNet performances, such as the previously mentioned AlexNet [1] (winner in 2012), Google's "Inception" network [2] (winner in 2014), or Microsoft's "ResNet" [3] (winner in 2015).

Challenges entered the world of digital pathology around 2012. Quoting Geert Litjens (emphasis mine):

The introduction of grand challenges in digital pathology has fostered the development of computerized digital pathology techniques. The challenges that evaluated existing and new approaches for analysis of digital pathology images are: EM segmentation challenge 2012 for the 2D segmentation of neuronal processes, mitosis detection challenges in ICPR 2012 and AMIDA 2013, GLAS for gland segmentation and, CAMELYON16 and TUPAC for processing breast cancer tissue samples.

G. Litjens et al [4]

MITOS12 [5] was one of the first computer vision contests with a specific digital pathology task. Let's take a closer look at it.

2. MITOS12: The Challenge

First of all, why do we want to count mitoses? I'll let the experts explain (emphasis mine):

Mitotic count is an important parameter in breast cancer grading as it gives an evaluation of the aggressiveness of the tumor. However, consistency, reproducibility and agreement on mitotic count for the same slide can vary largely among pathologists. An automatic tool for this task may help for reaching a better consistency, and at the same time reducing the burden of this demanding task for pathologists.

L. Roux et al [5]
There are 3 mitoses in this image from the MITOS12 dataset, according to the pathologist who annotated it. Good luck finding them!

The following process was used to create the challenge:

  1. A team from the pathology department of the Pitié-Salpêtrière Hospital selected five H&E-stained slides, each coming from a different patient.
  2. Each slide was scanned with three different scanners (Aperio, Hamamatsu and a multispectral microscope).
  3. One pathologist manually annotated all mitotic cells in 50 regions (10 per slide). There were 326 annotated mitoses in the dataset. An example of a full region is shown above.
  4. The 50 regions were split into a "training set" of 35 regions and a "test set" of 15 regions.
  5. The training set (images and annotations) was released in November 2011, and different teams started developing and training their algorithms.
  6. The test set (images only) was provided in August 2012, and participating teams sent their predictions.
  7. All participants were evaluated using the following metric:
    • Count the number of "True Positive" (TP), meaning the number of mitotic cells that were correctly predicted by the algorithm.
    • Count the number of "False Negatives" (FN), meaning the number of mitotic cells that were not detected by the algorithm.
    • Count the number of "False Positive" (FP), meaning the number of non-mitotic cells that were incorrectly detected as mitosis by the algorithm.
    • Compute the F1 Score, defined as \(2 TP \over 2 TP + FN + FP\). The F1 score is a very common metric for classification tasks, which gives equal importance to the "precision" (are "positive predictions" accurate?) and "recall" (do we find all "positive examples"?).
  8. The organizer published a ranking of the algorithms according to this metric.
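
As a rough illustration of the evaluation mentioned above, here is a minimal Python sketch of a detection metric: predicted mitosis centroids are matched to ground-truth centroids within a distance threshold, and the F1 score is computed from the resulting counts. The threshold value and the greedy matching strategy are illustrative assumptions, not the exact MITOS12 criteria.

    import numpy as np
    from scipy.spatial.distance import cdist

    def detection_f1(predicted, ground_truth, max_dist=20.0):
        """predicted, ground_truth: (N, 2) arrays of (x, y) centroids, in pixels."""
        tp = 0
        if len(predicted) and len(ground_truth):
            d = cdist(predicted, ground_truth)        # pairwise distances
            matched = set()
            for i in np.argsort(d.min(axis=1)):       # greedy: closest predictions first
                j = int(np.argmin(d[i]))              # nearest ground-truth mitosis
                if d[i, j] <= max_dist and j not in matched:
                    matched.add(j)
                    tp += 1
        fp = len(predicted) - tp                      # detections matching no mitosis
        fn = len(ground_truth) - tp                   # mitoses that were missed
        return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

With 70 true positives, 9 false positives and 30 false negatives, the formula gives 140 / 179 ≈ 0.7821, which is how the winning score reported below can be recomputed from the counts.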

The MITOS12 results were computed separately for the images from the three different scanners. Most teams only submitted results for the Aperio scanner. The best results on that dataset were achieved by Dan Cireşan's team at IDSIA [6]. Their algorithm correctly detected 70 of the 100 mitoses in the test set, with only 9 false detections, for a winning F1 score of 0.7821; the runners-up achieved scores of 0.7184 and 0.7094.

3. What is wrong with those results?

There are numerous problems with the challenge design, which were actually acknowledged by the authors in this paragraph from the Discussion of their article:

An improved version of this successful challenge will involve a much larger number of mitosis, images from more slides and multiple pathologists’ collaborative/cooperative annotations. Besides, some slides will be dedicated to test only without any HPF of these slides included in the training data set.

L. Roux et al [5]

There are four different issues here, so let's take them one at a time.

1) The small number of mitoses, with only around 200 in the training set, is certainly a problem. As mitoses can have a very variable appearance, it is unlikely that all of the possible "morphologies" of mitotic cells are represented in the dataset, and Machine Learning algorithms will therefore struggle to detect the ones they have never seen.

2) The small number of slides is even more problematic - more precisely, a small number of slides from a small number of patients. Having more patients means more diversity, and less risk of bias in the dataset. For instance, if a patient happens to have more mitoses than the others (because she has a more malignant cancer), and also happens to have some unrelated morphological characteristics in her breast tissue, it is very likely that the algorithm will pick up those unrelated features and use them to predict the presence of mitotic cells, even though those features would be completely meaningless for other patients.

3) The fact that only one pathologist annotated the slides is also worrying. In the introduction to the challenge, the organizers list the lack of inter-pathologist agreement on mitotic counts as a reason for needing automatic counting, citing a study by Christopher Malon and his colleagues [7]. In that study, three different pathologists were asked to classify 4204 nuclei as mitotic or non-mitotic, and were allowed to answer "maybe". Even excluding those "maybe" answers from the comparison (and therefore comparing only cases where both pathologists were "reasonably certain" of their choice), the agreement between two pathologists was at best 93.5% and at worst 64.7%. That same study also compares each pathologist against the "majority label": computed from the data of that study, the F1 score of a pathologist against the majority varies between 0.704 and 0.997. In the MITOS12 challenge results, the top 3 teams all fall within that range.
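
To make this kind of comparison concrete, here is a small sketch of how pairwise agreement and "F1 against the majority label" can be computed for three annotators. The labels are randomly generated for illustration; they are not the data from Malon's study.

    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    truth = rng.integers(0, 2, size=4204)              # hidden "true" labels (made up)
    # three imperfect annotators, each flipping labels with a different error rate
    annotators = [np.where(rng.random(truth.size) < p, 1 - truth, truth)
                  for p in (0.05, 0.10, 0.20)]

    # pairwise agreement: fraction of nuclei given the same label by both annotators
    for i in range(3):
        for j in range(i + 1, 3):
            agreement = (annotators[i] == annotators[j]).mean()
            print(f"agreement between annotators {i} and {j}: {agreement:.3f}")

    # majority vote over the three annotators, then F1 of each annotator against it
    majority = (np.sum(annotators, axis=0) >= 2).astype(int)
    for i, labels in enumerate(annotators):
        print(f"annotator {i} vs majority: F1 = {f1_score(majority, labels):.3f}")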

Now, that doesn't mean that those three teams are necessarily better than, or even as good as, pathologists: it's not fair to compare them on different datasets. But the point is that the difference in performance between the top teams in the challenge is small enough that a different annotator might have led to a completely different ranking.

4) The last problem acknowledged in Roux's publication is the worst one, at least in terms of methodology: they didn't split the training set and the test set properly. Ideally, when we test an algorithm, we want to make sure that it is capable of handling new cases. The best way to do that is to have a test set that is as independent as possible from the training set.

In biomedical imaging, that typically means testing on other slides, taken from other patients, if possible acquired with another machine. Changing more variables means that the results are a lot more meaningful: the best algorithms then have to be those that really learned to describe the object of interest - in this case, the mitoses. In her master's thesis in our lab, Elisabeth Gruwé tested the same algorithm first with the "official" training/test split from the contest, and then with a "correct" split, putting one patient aside for testing and training on the four others. The results on the official split were close to those of the three winning teams (0.68); the results on the correct split were significantly worse (0.54).
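
For reference, here is a minimal sketch of such a patient-level split using scikit-learn, assuming each region is tagged with a patient identifier; regions from the same patient then never appear in both the training and the test set. The region counts mirror the MITOS12 setup, while the features and labels are placeholders.

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    regions = np.arange(50).reshape(-1, 1)          # 50 regions (placeholder features)
    labels = np.zeros(50)                           # placeholder labels
    patient_ids = np.repeat(np.arange(5), 10)       # 5 patients, 10 regions each

    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(regions, labels, groups=patient_ids))

    print("patients in the test set:", np.unique(patient_ids[test_idx]))
    print("patients in the training set:", np.unique(patient_ids[train_idx]))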

4. Does it matter?

Does the ranking matter? In terms of visibility, probably. The methods proposed by challenge winners tend to be copied, modified, adapted, and become the norm, while runners-up may be completely ignored... even if their results are functionally equivalent. If we look at the publications of the three top MITOS12 methods, a certain trend is visible. The winner, Cireşan's team's article [6], was published in the biggest biomedical imaging conference and has been cited more than a thousand times (according to Google Scholar). The runner-up, Humayun Irshad from the University of Grenoble, was published in a good journal [8] and has been cited about 150 times. The third, Ashkan Tashk and a team from Shiraz University of Technology, was published in the proceedings of an obscure Iranian conference [9] and cited about 30 times.

Now, the rankings are not the only explanation for this difference in visibility, and the number of citations is not a direct reflection of the influence of a paper. Dan Cireşan was part of a well-established research team with Jürgen Schmidhuber, a Deep Learning pioneer. Humayun Irshad's thesis director was Ludovic Roux, the organizer of the challenge - which is a problem of its own, but it ensured that he got some visibility in the follow-up articles. Ashkan Tashk and the Iranian team certainly didn't have the same recognition beforehand - or after.

Two years later, an extended version of the challenge was proposed at the ICPR 2014 conference: MITOS-ATYPIA 14. It provided more data, and the annotations were made by two different pathologists, with a third one looking at all the cases where the first two disagreed. The data included a confidence score for each mitosis based on the agreement or disagreement of the pathologists... and it was correctly split at the patient level. In 2016, Hao Chen and his Hong Kong team published their results on both the 2012 and 2014 datasets [10]. On the 2012 dataset, they achieved an F1 score of 0.788, "beating" Cireşan's entry. They also beat all other existing publications on the 2014 dataset... with a score of 0.482. Comparing the two datasets, they say:

One of the most difficult challenges in [the 2014] dataset is the variability of tissue appearance, mostly resulted from the different conditions during the tissue acquisition process. As a result, the dataset is much more challenging than that in 2012 ICPR.

H. Chen et al [10]

But from a purely machine learning perspective, this doesn't sound quite right. Yes, the increased variability in the test set makes it more challenging, but the increased variability in the training set should help the algorithms. The huge drop in performance is likely due, in large part, to the incorrect setup of the 2012 challenge. The 2014 edition, however, attracted far fewer participants and didn't get the same visibility... probably because the results were, for obvious reasons, a lot worse.

References

  1. A. Krizhevsky, I. Sutskever, G. Hinton (2012) "ImageNet Classification with Deep Convolutional Neural Networks", Communications of the ACM, 60(6), DOI: 10.1145/3065386
  2. C. Szegedy et al. (2015) "Going deeper with convolutions", CVPR 2015, DOI: 10.1109/CVPR.2015.7298594
  3. K. He, X. Zhang, S. Ren, J. Sun (2015) "Deep residual learning for image recognition", CVPR 2016, DOI: 10.1109/CVPR.2016.90
  4. G. Litjens et al. (2017) "A survey on deep learning in medical image analysis", Medical Image Analysis, 42, 60-88, DOI: 10.1016/j.media.2017.07.005
  5. L. Roux et al. (2013) "Mitosis detection in breast cancer histological images - An ICPR 2012 contest", Journal of Pathology Informatics, 4, 1, DOI: 10.4103/2153-3539.112693
  6. D. Cireşan, A. Giusti, L. Gambardella, J. Schmidhuber (2013) "Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks", Proceedings of MICCAI 2013, Lecture Notes in Computer Science, 8150, 411-418, DOI: 10.1007/978-3-642-40763-5_51
  7. C. Malon et al. (2012) "Mitotic figure recognition: agreement among pathologists and computerized detector", Analytical Cellular Pathology, 35(2), 97-100, DOI: 10.3233/ACP-2011-0029
  8. H. Irshad (2013) "Automated mitosis detection in histopathology using morphological and multi-channel statistics features", Journal of Pathology Informatics, 4, 10, DOI: 10.4103/2153-3539.112695
  9. A. Tashk, M. Helfroush, H. Danyali, M. Akbarzadeh (2013) "An automatic mitosis detection method for breast cancer histopathology slide images based on objective and pixel-wise textural features classification", Proceedings of CIKT, DOI: 10.1109/IKT.2013.6620101
  10. H. Chen, Q. Dou, X. Wang, J. Qin, P. Heng (2016) "Mitosis Detection in Breast Cancer Histology Images via Deep Cascaded Networks", AAAI Conference on Artificial Intelligence, AAAI Publications

1. Who was first?

In 2017, Geert Litjens and his colleagues from Radboud University published a comprehensive survey of deep learning methods in medical image analysis [1]. It's possible that they missed some early papers that could qualify as "Deep Learning": as I've written before, the boundaries of what is or isn't Deep Learning are unclear, and articles written before 2012 are unlikely to use that terminology. If such articles exist, however, they have been forgotten in the large pile of never-cited research that fails to be picked up by Google Scholar, Scopus, or other large research databases. Litjens' survey therefore remains the reference on the matter.

So who was first to use "Deep Learning in Digital Pathology"? The turning point seems to come from the MICCAI 2013 conference in Nagoya, with two pioneering articles: Angel Cruz-Roa's automated skin cancer detection [2], and Dan Cireşan's mitosis detection in breast cancer [3].

Cruz-Roa uses H&E-stained images from skin tissue to try to automatically determine whether there is a malignant cancer in the sample. As illustrated in the figure below, the difference between cancer and non-cancer is based on morphological criteria which are very difficult to define.

Example of H&E images from cancer and non-cancer skin tissue, taken from [2]

Their algorithm produces both a classification (cancer or not) at the image level, and what they call a "digital staining", which is essentially a probability map of where the cancer regions are (see figure below). This is a very important feature for machine learning methods in biomedical imaging, related to the concept of interpretability. A machine learning algorithm which only produces a "diagnosis", but is unable to "explain" how this diagnosis came to be, cannot be trusted. I will most certainly come back to that idea later: the "reasoning" that machine learning algorithms (deep or not) perform is sometimes more a reflection of biases and artefacts in the data used to train them than of an understanding of the pathology. Having an output which includes an explanation of the diagnosis is therefore essential to check whether any "weird stuff" is happening.

Output of Cruz-Roa's algorithm, taken from [2]
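
To make the "digital staining" idea concrete, here is a rough sketch of how a patch classifier can be turned into a probability map: a window slides over the image and the predicted cancer probability is stored at each location. The function predict_cancer_probability is a hypothetical stand-in for a trained model, not Cruz-Roa's actual architecture.

    import numpy as np

    def digital_staining(image, predict_cancer_probability, patch=64, stride=32):
        """Build a low-resolution map of 'probability of cancer' per patch."""
        h, w = image.shape[:2]
        heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
        for i in range(heatmap.shape[0]):
            for j in range(heatmap.shape[1]):
                y, x = i * stride, j * stride
                heatmap[i, j] = predict_cancer_probability(image[y:y + patch, x:x + patch])
        return heatmap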

Cireşan also uses H&E images, taken from breast cancer tissue. I'll let him introduce the problem:

Mitosis detection is very hard.

Cireşan, Giusti, Gambardella & Schmidhuber [3]

Mitosis - the process through which a cell divides - is a relatively rare event, which means that, in the images available to train the algorithm, only a very small fraction of the pixels will belong to a nucleus, and an even smaller fraction will belong to a nucleus going through mitosis. In addition to being rare, the appearance of the cell nucleus is very different depending on which stage of mitosis the cell is in. To get an idea of how difficult the task is, we can just look at these examples from the article:

Examples of mitotic and non-mitotic cells, with the "mitosis probability" given by Cireşan's algorithm and the true classification, taken from [3]

Cireşan's results were far from perfect, but they were impressive enough to be a milestone in the domain: as the winner of the ICPR 2012 mitosis detection competition, the method got a lot of attention... despite the many methodological issues with the competition itself, which is the topic of another post.

2. Invasion of the Deep Learners

By showing that Deep Learning was a way to get good results on digital pathology tasks, Cireşan and Cruz-Roa opened up the floodgates. Litjens lists many different applications in the subsequent years: bacterial colony counting, classification of mitochondria, classification and grading of different types of cancer, detection of metastases... Mostly on H&E images, but also sometimes using immunohistochemistry, Deep Learning invaded the domain.

A few highly influential works that I would like to mention here, and that I will probably write about more later:

  • In 2015, Olaf Ronneberger, Philipp Fischer and Thomas Brox introduced the U-Net architecture [4], winning challenges in cell segmentation and cell tracking at the ISBI 2015 conference. This particular architecture is now probably the most widely used in biomedical imaging (a minimal sketch of the idea follows this list).
  • In 2016, Andrew Janowczyk and Anant Madabhushi published their Deep Learning for digital pathology tutorial [5], a very practical article on how to approach various use cases in digital pathology, with well-annotated datasets that they also made public. For researchers in the field, it is probably one of the best available resources to get started.
  • In 2017, Korsuk Sirinukunwattana published the results of the "GlaS" challenge on gland segmentation in colon histology images [6]. This challenge was influential in two ways: first, by providing a high-quality dataset of colon histology images with very accurate annotations, and second, by demonstrating how much Deep Learning had penetrated the field of digital pathology. Of the 6 methods deemed good enough to be described in the results article, 5 used deep learning approaches.
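
Since U-Net comes up so often, here is a minimal PyTorch sketch of the underlying idea - an encoder that downsamples, a decoder that upsamples, and skip connections between the two - with only two resolution levels and padded convolutions. It is a toy version, not the exact architecture from Ronneberger et al.

    import torch
    import torch.nn as nn

    def double_conv(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            self.enc1 = double_conv(in_ch, 32)
            self.enc2 = double_conv(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.dec1 = double_conv(64, 32)          # 64 = 32 (skip) + 32 (upsampled)
            self.out = nn.Conv2d(32, n_classes, 1)   # per-pixel class scores

        def forward(self, x):
            s1 = self.enc1(x)                        # features at full resolution
            s2 = self.enc2(self.pool(s1))            # features at half resolution
            u1 = self.up(s2)                         # back to full resolution
            d1 = self.dec1(torch.cat([u1, s1], dim=1))   # skip connection
            return self.out(d1)

    # e.g. one 256x256 RGB patch -> per-pixel logits of shape (1, 2, 256, 256)
    logits = TinyUNet()(torch.randn(1, 3, 256, 256))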

3. Looking forward

After these pioneering works, the future of the field may seem a little dull. If we have deep neural networks that work for most digital pathology tasks... what is there left to do?

Fortunately, finding a good deep learning network is only one part of the digital pathology pipeline. Everything that comes around the network - from the constitution of the datasets to the way the results are evaluated - is often more important to the final result. That is going to be a large part of what I will write about in future posts: questions about how data from challenges and data from real-world applications may differ, about the way we evaluate algorithms, and about how we declare winners and losers in ways that may not always reflect how useful the algorithms really are. A good starting point will be to take a closer look at the aforementioned MITOS12 challenge from ICPR 2012.

References

  1. G. Litjens et al. (2017) "A survey on deep learning in medical image analysis", Medical Image Analysis, 42, 60-88, DOI: 10.1016/j.media.2017.07.005
  2. A. Cruz-Roa, J. Ovalle, A. Madabhushi, F. Osorio (2013) "A Deep Learning Architecture for Image Representation, Visual Interpretability and Automated Basal-Cell Carcinoma Cancer Detection", Proceedings of MICCAI 2013, Lecture Notes in Computer Science, 8150, 403-410, DOI: 10.1007/978-3-642-40763-5_50
  3. D. Cireşan, A. Giusti, L. Gambardella, J. Schmidhuber (2013) "Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks", Proceedings of MICCAI 2013, Lecture Notes in Computer Science, 8150, 411-418, DOI: 10.1007/978-3-642-40763-5_51
  4. O. Ronneberger, P. Fischer, T. Brox (2015) "U-Net: Convolutional Networks for Biomedical Image Segmentation", Proceedings of MICCAI 2015, Lecture Notes in Computer Science, 9351, 234-241, DOI: 10.1007/978-3-319-24574-4_28
  5. A. Janowczyk, A. Madabhushi (2016) "Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases", Journal of Pathology Informatics, 7, 29, DOI: 10.4103/2153-3539.186902
  6. K. Sirinukunwattana et al. (2017) "Gland Segmentation in Colon Histology Images: The GlaS Challenge Contest", Medical Image Analysis, 35, 489-502, DOI: 10.1016/j.media.2016.08.008

1. Defining Deep Learning

It seems like Deep Learning should have an easy, clear-cut definition. Yet... Wikipedia, on this topic, displays a remarkable example of circular citation - or Citogenesis, as the always-relevant XKCD would put it. The Wikipedia definition is a "summary" of five definitions from a Microsoft Research paper, most of which are themselves taken from earlier versions of the same Wikipedia article.

The most direct definition from a reputable source that I could find is probably from the "Deep Learning" Nature review by AI superstars Yann LeCun, Yoshua Bengio and Geoffrey Hinton (emphasis mine):

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.

LeCun, Bengio & Hinton [1]

In a slightly more convoluted way, Ian Goodfellow, Yoshua Bengio (again) and Aaron Courville, in their "Deep Learning" book, introduce the topic this way (emphasis mine):

The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images. (...) A solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined through its relation to simpler concepts.

Goodfellow, Bengio & Courville [2]

These definitions basically boil down to: it's AI, with machine learning, with layers. So... What's machine learning? And, while we're at it, what is AI?

Timeline of AI, Machine Learning and Deep Learning, by NVIDIA. [source]

2. What is Artificial Intelligence?

The history of Artificial Intelligence as a practical, computer-science-related field of research goes back to the early days of computers themselves, around and right after World War II. The "ultimate goal" of AI, as illustrated in Alan Turing's best-known paper [3], was to create a computer which could be - at least in specific conditions - indistinguishable from a human.

This, unsurprisingly, is a difficult task. In fact, this particular goal, which is now generally referred to as "Artificial General Intelligence", is still mostly the domain of science-fiction.

One of the early avenues of research in AI, in the tradition of "trying to replicate human intelligence", was the artificial neuron. As early as 1943, McCulloch and Pitts [4] proposed a way to represent neurons in a mathematical model which could be replicated on a computer. They were followed by many others, but while their research was interesting, it proved to be largely impractical. Neural networks, quite simply, did not work. Artificial (General) Intelligence seemed altogether impossible. If AI as it was conceived couldn't be done, the next best thing was to change the definition of AI to something more forgiving.

In "Artificial Intelligence: foundations of computational agents" [5], Poole & Mackworth propose such a definition. AI, in their view, studies computational agents (which are agents whose actions and decisions can be implemented in a physical device, like a computer) that act intelligently, which means that it has actions appropriate for its circumstances and its goals, is flexible to changing environments and changing goals, and learns from experience.

This places any artificial intelligence in the context of task or problem solving. The job of an AI is not to be "like a human", but to have "human-like" (or better-than-human) capabilities in one or several specific tasks.

An interesting aspect of AI as defined by Poole & Mackworth is the capacity to learn from experience. Machine Learning is a subset of AI dealing with this particular aspect of "intelligence": how can a machine learn from experience?

3. Machine Learning

To understand the basic idea of Machine Learning, it's interesting to look at one of its earliest algorithms, "Nearest Neighbor", a version of which was described as early as 1951 [6].

Let's take an example. Imagine that you are a Big Streaming Company, and you want your AI to decide which movies or series in your catalog to recommend to a particular user. Let's assume that you actually want to recommend something that the user will like, and not just something that you want to promote. For every movie or TV series, you have two pieces of information: how much violence and how much comedy there is, as a percentage of the total runtime. For everything that the user has already seen, you also know (don't ask how) whether they liked it or not.

You could therefore represent all of the movies that the user has seen on a nice graph, like this:

Very important movie data: how do you know if you will like the next movie?

What you do now is take the movie that you want to recommend and put it on the same graph. Then, you check whether the closest point is something that the user liked or not. If it is, you recommend the movie; if not, you don't.

Obviously, this is a ridiculous example, but the main idea is there: the algorithm uses past experience to predict new behaviour. Of course, this works a lot better if you have more data, and if you have better ways of describing that data.
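
A toy version of this nearest neighbor recommender fits in a few lines; the movie descriptors (violence %, comedy %) and the "liked" labels below are of course made up.

    import numpy as np

    # movies the user has already seen: (violence %, comedy %) and whether they liked them
    seen = np.array([[80, 5], [70, 10], [10, 90], [20, 70], [50, 50]], dtype=float)
    liked = np.array([True, True, False, False, True])

    def recommend(candidate):
        distances = np.linalg.norm(seen - np.asarray(candidate, dtype=float), axis=1)
        nearest = int(np.argmin(distances))      # index of the closest movie already seen
        return liked[nearest]                    # recommend only if that one was liked

    print(recommend([75, 8]))    # close to action movies the user liked -> True
    print(recommend([15, 80]))   # close to comedies the user disliked -> False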

There are many, many more complex and more accurate algorithms in Machine Learning. But in the end, they fundamentally do the same thing: put all of the data (the "past experience") into some space that describes it as well as possible, and then find in that space a "rule" that best predicts what to do with any future event. This rule can be as simple as a straight line (or, to be precise in the more-than-two-dimensions case, a hyperplane) dividing the space in two, or an intricate function with millions of parameters able to model any boundary shape, as in modern artificial neural networks.

4. Deep Learning and the neural networks comeback

While diverse machine learning algorithms such as Decision Trees, Support Vector Machines and many others were being developed, the artificial neural network world was not completely inactive. One of the major issues with neural networks - how to efficiently "train" them with new examples - was vastly improved upon in 1986 with the backpropagation algorithm [7], which is still used today. In 1989, Yann LeCun and his team used it in what is now considered one of the first practical applications of "modern" artificial neural networks: recognizing hand-written ZIP codes for the US Postal Service [8].
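
To give an idea of what backpropagation does, here is a small self-contained sketch: a one-hidden-layer network trained with manually derived gradients on a toy 2D classification problem. It is illustrative only, far from LeCun's convolutional network for ZIP codes.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # XOR-like toy labels

    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)  # hidden layer weights
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)  # output layer weights
    lr = 0.5

    for step in range(2000):
        # forward pass
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted probability (sigmoid)
        # backward pass: gradients of the cross-entropy loss, propagated layer by layer
        d_out = (p - y) / len(X)                  # gradient at the output pre-activation
        dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * (1 - h ** 2)       # chain rule through the tanh layer
        dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
        # gradient descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("training accuracy:", ((p > 0.5) == y).mean())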

In 1997, IBM's Deep Blue beat world champion Garry Kasparov in a six-game chess match [9]. In terms of public perception, this certainly gave AI enthusiasts a boost. Deep Blue, however, was an "intelligence" only under the most forgiving definition. It didn't learn anything, it didn't reason: it was a pure, brute-force mechanism. Deep Blue simply took the current situation of the game and computed all possible outcomes for all possible moves, for the next 10 to 20 moves. It did use "previous experience", in the form of thousands of previously played human-vs-human grandmaster games, to determine what a "winning move" looked like. But in the end, it mostly relied on the fact that chess is a game with fixed rules and a finite number of outcomes. It worked because it had more processing power than what was previously available, not because it was innovative.
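
The core of that brute-force approach is game-tree search. Here is a toy sketch of the idea on a tiny hand-written game tree; the real Deep Blue added pruning, dedicated hardware and a far richer evaluation function, so this only shows the principle.

    # a tiny, made-up game tree: position -> reachable positions
    GAME_TREE = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    SCORES = {"a1": 3, "a2": -2, "b1": 1, "b2": 8}     # static evaluation of the leaves

    def minimax(position, maximizing=True):
        children = GAME_TREE.get(position, [])
        if not children:                               # leaf: return its evaluation
            return SCORES[position], None
        results = [(minimax(child, not maximizing)[0], child) for child in children]
        return max(results) if maximizing else min(results)

    print(minimax("start"))   # -> (1, 'b'): best guaranteed score, and the move to play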

LeCun's success put neural networks back on the map, but they were still a curiosity. In most applications, they were impractical, took way too long to train, and didn't usually perform better than other machine learning approaches. But with the 21st century came two game changers in the machine learning world: Big Data and fast GPUs. Big Data - the ability to store huge amounts of data on everything, thanks to cheap hard drives - gave us the means to improve machine learning in general. Fast GPUs made training larger, more useful neural networks a reality. Quoting Dan Cireşan and his colleagues in 2010:

All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images, and graphics cards to greatly speed up learning.

Cireşan, Meier, Gambardella & Schmidhuber [10]

Around the same time, the idea that "larger" neural networks were actually "deeper" neural networks, with neurons organized in "layers", each layer connected to the next starting from the raw data all the way up to the output, became common usage.

By 2012, the achievements of "deep" neural networks had become impossible to ignore. On ImageNet, the largest visual object recognition challenge, Alex Krizhevsky's AlexNet [11] dominated that year's field. Deep Learning approaches have since consistently beaten "classical" machine learning methods on just about everything. Most notably, they have become the standard solution for computer vision and language processing. In the world of AI, Deep Learning is now the law of the land.

Conclusion?

To summarize:

  • Artificial Intelligence describes any (artificial) system that takes (intelligent) actions based on its environment.
  • Machine Learning describes a subset of AI which uses past events (the "learning" dataset) to create a model of how to react to new events (the "decision rule").
  • Deep Learning describes a subset of Machine Learning where the model is learned in a "layered" manner: simple rules learned from the raw data feed into more complex rules, which feed into more complex rules, and so on, until you get to more abstract concepts. To take a computer vision example: from pixels, to shapes and colors, to eyes and noses, to recognizable faces.

These definitions are fuzzy. The boundaries between Deep Learning and "non-deep" Machine Learning are unclear, as are sometimes the boundaries between Machine Learning and "old fashioned AI". That's fine: we don't need every method to fit into a well-defined box.

All right. Now that we have defined what Digital Pathology and Deep Learning are, the next question will be: how has Deep Learning been applied to Digital Pathology?

References

  1. Y. LeCun, Y. Bengio, G. Hinton (2015) "Deep Learning", Nature, 521, 436-444, DOI: 10.1038/nature14539
  2. I. Goodfellow, Y. Bengio, A. Courville (2016) "Deep Learning", MIT Press, https://www.deeplearningbook.org/
  3. A. Turing (1950) "Computing Machinery and Intelligence", Mind, 59, 433-460, DOI: 10.1093/mind/LIX.236.433
  4. W. McCulloch, W. Pitts (1943) "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5, 115-133.
  5. D. Poole, A. Mackworth (2017) "Artificial Intelligence: Foundations of Computational Agents (Second Edition)", available online
  6. E. Fix, J. Hodges (1951) "Discriminatory Analysis - Nonparametric Discrimination: Consistency Properties", USAF School of Aviation Medicine - University of California, Berkeley, available online
  7. D. Rumelhart, G. Hinton, R. Williams (1986) "Learning representations by back-propagating errors", Nature, 323, 533-536, DOI: 10.1038/323533a0
  8. Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, et al. (1989) "Back-propagation applied to handwritten zip code recognition", Neural Computation, 1(4), 541-551, DOI: 10.1162/neco.1989.1.4.541
  9. Deep Blue (chess computer) on Wikipedia, seen on Feb 12th, 2020.
  10. D. Cireşan, U. Meier, L. Gambardella, J. Schmidhuber (2010) "Deep Big Simple Neural Nets Excel on Hand-written Digit Recognition", Neural Computation, 22(12), 3207-3220, DOI: 10.1162/NECO_a_00052
  11. A. Krizhevsky, I. Sutskever, G. Hinton (2012) "ImageNet Classification with Deep Convolutional Neural Networks", Communications of the ACM, 60(6), DOI: 10.1145/3065386

Let's start from the beginning. When I started my thesis, the topic we settled on was "Deep Learning in Digital Pathology". It's vague - but that was kind of the point. Deep Learning and Digital Pathology were both recent trends at the time, so trying to look at what could be done with them in general seemed like a good idea.

There are two parts in "Deep Learning in Digital Pathology". The first one, Deep Learning, is where I have spent most of my thesis. That's the part that concerns me as a biomedical engineer specialized in image analysis, and where I can contribute the most. The second part, however, is just as important: it's the application, what we want to use Deep Learning for. I am certainly not an expert in Digital Pathology - and even less of an expert in non-digital histopathology - but understanding what the methods I develop may be used for does seem like a good idea, so let's briefly get into it.

1. Histopathology

The goal of histopathology is to examine human tissue (usually taken from a tumour or some other possibly diseased area) under a microscope, to formulate a diagnosis or to get a better understanding of a disease. The process, in short, is as follows:

  1. Cut a small bit of tissue from someone's body (typically during surgery or a biopsy).
  2. "Fix" it in formalin (or freeze it) so the tissue keeps its shape, then embed it in a paraffin block.
  3. Cut the block in very thin slices using a microtome, which is basically a very small and precise meat slicer.
  4. Stain the tissue to make whatever we are looking for more visible.
  5. Have a trained pathologist examine the resulting slide under a microscope to get a diagnosis.

Why do we have to stain the tissue? Because cells are mostly water, water tends to be transparent [citation needed], and transparent things are hard to look at with visible light. Fortunately, some chemical pigments have properties which are very useful for pathologists. For instance, in the late 19th and early 20th century, it was discovered that we could use hematoxylin to stain the nuclei of cells in blue, and eosin to stain the cytoplasm in pink [1].

This produces "Hematoxylin & Eosin" - H&E - images like this one below, where the structure of the tissue is easy to analyse for the pathologist:

H&E stained image, from [2].

Slightly more recently, we realized that we could use the properties of antigens and antibodies to get more specific staining. The idea is this: in our body, antibodies are proteins designed to specifically bind to particular molecules - antigens - as part of the immune response. We can "hack" this process by binding a staining agent to an antibody, and therefore "highlight", in the tissue, the places where the corresponding antigen is present. This method is called immunohistochemistry, or IHC. For instance, in the image below, we have the same part of the tissue stained with H&E on one side, and with an IHC marker (anti-pan-cytokeratin, to be precise) on the other. The IHC marker highlights the cells which are part of a tumour, which is rather useful information to have in histopathology.

(left) H&E versus (right) IHC-stained images, from colorectal cancer tissue.

2. Digital pathology

So where does the "digital" part fit in all this?

The problem with the process above is that it requires the trained pathologist to be physically in front of the microscope, with the slide in it. There are a number of drawbacks to this. One is that it's hard to get a second opinion from a specialist located somewhere else. Another is that comparing a tissue sample to, for instance, another sample taken months or years before requires finding the physical slide in the archives of the hospital.

How do we solve that? With digital scanners. Very expensive, very high-resolution, very precise digital scanners. The entire slide can be scanned at multiple levels of magnification to produce gigapixel images, which can be viewed on a computer. The pathologist can then access the image in a "virtual microscope".
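
These gigapixel images are usually stored in pyramidal formats that can be read region by region. A minimal sketch with the openslide library (assuming a hypothetical file "slide.svs") looks like this:

    import openslide

    slide = openslide.OpenSlide("slide.svs")          # hypothetical whole-slide image
    print("magnification levels:", slide.level_count)
    print("dimensions per level:", slide.level_dimensions)

    # read a 1024x1024 region at level 0 (highest magnification);
    # the (x, y) location is always expressed in level-0 coordinates
    region = slide.read_region((10000, 20000), level=0, size=(1024, 1024))
    region.convert("RGB").save("region.png")          # read_region returns a PIL image
    slide.close()

The whole image never has to fit in memory, which is what makes the "virtual microscope" (and automated analysis) practical.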

And once you have the slides as digital objects, you open the door to many possibilities, from tools to easily annotate the slides (for instance for teaching purposes, or simply to quickly document the reasoning behind a diagnosis) to automated analysis of certain aspects of the tissue. In particular, some quantitative analyses are very hard for a human expert to do in an objective manner (like evaluating "what percentage of the tissue shows this marker?"), yet relatively easy (or at least possible) for a well-designed algorithm.
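
As an example of such a quantitative question, here is a rough sketch of estimating the fraction of tissue that is positive for a brown (DAB) IHC marker, using color deconvolution from scikit-image. The file name and both thresholds are illustrative assumptions and would need tuning on real data.

    from skimage import io, color

    image = io.imread("ihc_region.png")[..., :3]     # hypothetical RGB image of an IHC region
    hed = color.rgb2hed(image)                       # haematoxylin / eosin / DAB channels

    tissue = color.rgb2gray(image) < 0.9             # non-background pixels (illustrative threshold)
    positive = hed[..., 2] > 0.02                    # "DAB-positive" pixels (illustrative threshold)

    fraction = (positive & tissue).sum() / max(tissue.sum(), 1)
    print(f"marker-positive fraction of the tissue: {fraction:.1%}")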

Digital image acquisition are becoming commonplace, and associated image analysis solutions are viewed by most as the next critical step in automated histological analysis.

Laoighse Mulrane [3]

In the more than 10 years since Mulrane's paper, there has indeed been a wide range of image analysis applications in digital pathology. And, in more recent years, as in most image analysis applications, one type of strategy seems to have very quickly surpassed all others: Deep Learning.

I guess that's a teaser for the "next episode"?

References

  1. M. Titford (2009) "Progress in the Development of Microscopical Techniques for Diagnostic Pathology", Journal of Histotechnology, 32(1), 9-19, DOI: 10.1179/his.2009.32.1.9
  2. K. Sirinukunwattana et al. (2017) "Gland Segmentation in Colon Histology Images: The GlaS Challenge Contest", Medical Image Analysis, 35, 489-502, DOI: 10.1016/j.media.2016.08.008
  3. L. Mulrane et al. (2008) "Automated image analysis in histopathology: a valuable tool in medical diagnostics", Expert Review of Molecular Diagnostics, 8(6), 707-725, DOI: 10.1586/14737159.8.6.707