[Reading] Top ACROBATs - pathology registration
Adrien Foucart, PhD in biomedical engineering.
This website is guaranteed 100% human-written, ad-free and tracker-free. Follow updates using the RSS Feed or by following me on Mastodon
In the reviews of medical image registration that I previously summarized, the most common methods were clearly intensity-based. This means that they use the pixel intensity to measure the similarity between the moving image and the target image. The big advantage of such methods is that they are dense: every pixel contributes to the overall similarity and helps find the best transform.
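To make “dense” concrete: a typical intensity-based similarity measure is normalized cross-correlation, where every pixel of both images enters the score. A minimal NumPy sketch (the function and the test image are my own illustration, not taken from any of the reviewed methods):

```python
import numpy as np

def normalized_cross_correlation(moving: np.ndarray, target: np.ndarray) -> float:
    """Dense intensity-based similarity: every pixel contributes to the score.
    Returns a value in [-1, 1], with 1 meaning perfectly correlated intensities."""
    m = moving.astype(float) - moving.mean()
    t = target.astype(float) - target.mean()
    denom = np.sqrt((m ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 0.0
    return float((m * t).sum() / denom)

img = np.random.default_rng(0).random((64, 64))
print(normalized_cross_correlation(img, img))  # ≈ 1.0: an image is maximally similar to itself
```

An optimizer would evaluate this score for many candidate transforms of the moving image and keep the one that maximizes it.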
This, however, is not the only available option, and it’s not always the best one. A key issue with intensity-based methods is that they typically do not work well when there are real alterations between the two images, “alterations” meaning here that the images are fundamentally different in some respect, and not just warped versions of each other. This is typically the case in digital pathology registration, where we may be working with adjacent slices of tissue (so that the individual cells are not the same from one slice to the next, even though the overall structure will be very similar), and/or with different stains (e.g. the “generic” H&E stain for one, and more specific immunohistochemistry stains for the other).
This is probably why, in the recent ACROBAT challenge, the top methods were instead feature-based. Let’s take a look at how they work.
The winner of the challenge was Christian Marzahl, from a company called Gestalt Diagnostics. I didn’t find a description of the method his team used in the challenge… but last year he published “Robust Quad-Tree based Registration on Whole Slide Images” (Marzahl et al. 2021), so I’m going to assume his ACROBAT entry was based on the same principles.
So what did they do?
They used a Quad-Tree approach to “recursively divide the WSI into image segments with successively higher resolution levels,” then to do “piece-wise affine approximation of any non-linear deformation,” using matching SIFT keypoints to determine the transformation matrix.
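The last step (turning a set of matched keypoints into a transformation matrix) is a least-squares problem. A generic sketch of that step, not the authors’ implementation:

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping src (N, 2) points onto dst (N, 2).
    Returns a 2x3 matrix A such that dst ≈ [x, y, 1] @ A.T for each point."""
    n = src.shape[0]
    # Homogeneous coordinates: each row is [x, y, 1].
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T = dst in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

# Four keypoint pairs related by a pure translation of (+5, -3):
src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
dst = src + np.array([5., -3.])
A = fit_affine(src, dst)
print(np.round(A, 6))
```

In the Quad-Tree scheme, a fit like this would be computed independently for each image segment, so that the collection of local affine transforms approximates a non-linear deformation.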
So, in slightly less technical terms: they detect keypoints in both images, match them, and fit an affine transform to each image segment, recursively subdividing segments into higher-resolution quadrants so that the collection of piece-wise affine transforms can approximate a non-linear deformation.
Feature-based methods are therefore sparse: they only use a selection of keypoints to find the transform. The big advantage of using only keypoints is that regions where the tissue is damaged, or otherwise “too” different between the two images, will just not be taken into account in the computation of the transform. It’s much easier to discard outliers in this way. It’s also a lot less dependent on a “pre-registration” step. Even if the two images are badly misaligned or rotated, it’s not too difficult to find the “best transform” between sets of matching points.
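The “discarding outliers” part is typically handled with a RANSAC-style loop: repeatedly fit a transform to a small random sample of matches, and keep the transform that the largest number of matches agrees with. A toy sketch restricted to pure translations for brevity (the function name and data are mine, not from any of the cited methods):

```python
import numpy as np

def ransac_translation(src, dst, threshold=2.0, iterations=100, seed=0):
    """Toy RANSAC: estimate a 2D translation from matched point pairs,
    ignoring outlier matches. Returns (translation, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_t, best_mask = np.zeros(2), np.zeros(len(src), dtype=bool)
    for _ in range(iterations):
        i = rng.integers(len(src))           # minimal sample: one match
        t = dst[i] - src[i]                  # candidate translation
        mask = np.linalg.norm(dst - (src + t), axis=1) < threshold
        if mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    # Refine the estimate using the inliers only.
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_t, best_mask

# 8 good matches shifted by (4, 7), plus 2 gross outliers:
src = np.arange(20, dtype=float).reshape(10, 2)
dst = src + np.array([4., 7.])
dst[3] += 100.
dst[7] -= 80.
t, inliers = ransac_translation(src, dst)
print(t, inliers.sum())  # → [4. 7.] 8
```

The two bad matches simply never end up in the inlier set, so they have no influence on the final transform, which is exactly the robustness property described above.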
The big disadvantage, of course, is that if the matching step is not good, the results are going to be useless.
The code from that publication is also available on GitHub: https://github.com/ChristianMarzahl/WsiRegistration, so it should be relatively easy to test on our images.
The runner-up of the challenge was the team behind VALIS (“Virtual Alignment of pathoLogy Image Series”), a digital pathology registration module. So let’s also take a look at their methods (Gatenbee et al. 2021).
While Marzahl et al. were concerned with pairs of slices, VALIS aims at registering a full stack of N adjacent slices. Their pipeline contains three main parts: a pre-processing module, a feature-based rigid registration module, and an intensity-based non-rigid registration refinement module.
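Registering a whole stack largely reduces to pairwise registrations plus matrix bookkeeping: once each adjacent pair is aligned, the pairwise transforms can be composed so that every slice maps onto one chosen reference slice. A generic sketch with homogeneous 3×3 matrices (my own helper, not VALIS code):

```python
import numpy as np

def to_reference(pairwise, r):
    """Given 3x3 homogeneous matrices pairwise[i] mapping slice i onto
    slice i+1, return matrices mapping every slice onto reference slice r."""
    n = len(pairwise) + 1
    out = [np.eye(3) for _ in range(n)]
    for i in range(r - 1, -1, -1):   # slices before the reference: compose forward maps
        out[i] = out[i + 1] @ pairwise[i]
    for i in range(r + 1, n):        # slices after the reference: compose inverse maps
        out[i] = out[i - 1] @ np.linalg.inv(pairwise[i - 1])
    return out

# Example: a stack of 4 slices, each shifted 1 pixel along x with respect
# to the previous one; register everything onto slice 0.
shift = np.array([[1., 0., 1.], [0., 1., 0.], [0., 0., 1.]])
mats = to_reference([shift, shift, shift], r=0)
print(mats[3])  # slice 3 must be shifted back by 3 pixels
```
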
Their pre-processing steps are:
This normalization step is further explained:
The normalization method is inspired by (Khan, Rajpoot, Treanor, & Magee, 2014), where first the 5th percentile, average, and 95th percentile of all pixel values are determined. These target values are then used as knots in cubic interpolation, and then the pixel values of each image are fit to the target values.
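A sketch of that normalization, assuming SciPy’s CubicSpline for the interpolation and made-up target values (the actual VALIS implementation may differ):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_normalizer(image: np.ndarray, target_knots) -> CubicSpline:
    """Cubic mapping that sends the image's 5th percentile, mean, and
    95th percentile onto shared target values (the interpolation knots)."""
    src_knots = [np.percentile(image, 5), image.mean(), np.percentile(image, 95)]
    return CubicSpline(src_knots, target_knots)

rng = np.random.default_rng(0)
img = rng.random((32, 32)) * 200       # stand-in for one image's pixel values
targets = [10.0, 128.0, 240.0]         # hypothetical target statistics shared by all images
spline = build_normalizer(img, targets)
normalized = spline(img)
print(float(spline(img.mean())))  # → 128.0: the image's mean maps onto its target knot
```

Applying the same target knots to every image in the stack pulls their intensity distributions toward a common reference, which makes the later intensity-based refinement step better behaved across stains.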
Their rigid registration steps are:
This step is done after the rigid registration. They use existing methods, which they don’t describe further:
VALIS can conduct this non-rigid registration using one of three methods: Deep Flow, SimpleElastix, or Groupwise SimpleElastix.
Their results tend to show that the improvement from the non-rigid transformation is relatively small.
It’s interesting to see that the two best methods in this particular challenge rely on relatively “old” techniques: keypoints, descriptors, and “good old” image processing. It seems, once again, that deep learning methods don’t show the same dominance for this task as they usually do in image analysis.
The good performance of feature-based methods is something to keep in mind, particularly when digital pathology is involved.