Segmenting pathology by delineating anomalous regions in medical images is a critical, yet tedious task in drug trials and clinical practice. Approaches using Artificial Intelligence provide an automated alternative, but still require vast amounts of manually annotated samples for training.
We propose a novel, disease-agnostic setup that learns segmentation by itself, i.e. without any manual sample segmentations – solely from exemplary images of healthy and diseased subjects. In this way, clinical experts are relieved of this laborious task.
While domain transfer has previously been used to highlight disease effects in medical images, to the best of our knowledge, no one has tried to distinguish explicitly between (1) what visually characterizes pathology and (2) where it occurs, as is done by our approach.
In the best case, our setup will render manual segmentations obsolete, not only for training but also for the actual application of the framework. Potentially, its results may even help find new visual manifestations of pathology that have previously been overlooked.
We hypothesize that detecting manifestations of pathology can be learned given two unpaired sets of images, one showing healthy and one showing diseased subjects, by automatically finding the visual clues that make pathological regions stand out.
In particular, we aim to model the difference between the domains of healthy and pathological images: we will train an artificial neural network to transfer an image from the pathological to the healthy domain by (1) “filling in” and (2) marking the regions that need to be altered to make the generated result appear healthy.
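The core of this transfer can be illustrated as a compositing step: the network predicts a mask marking pathological regions and an in-filled replacement, and the generated healthy image keeps the input wherever the mask is zero. The sketch below uses a toy NumPy example with hypothetical names (`compose_healthy`, `mask`, `fill`); it is not the authors' implementation, only the blending logic the text describes.

```python
import numpy as np

def compose_healthy(image, mask, inpainted):
    """Blend: keep the input where mask ~ 0, use the network's
    fill-in where mask ~ 1 (hypothetical helper, not the authors' code)."""
    return mask * inpainted + (1.0 - mask) * image

# Toy 4x4 grayscale "pathological" image with a bright 2x2 lesion.
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0

# Stand-ins for the two network outputs: where pathology was marked,
# and healthy-looking replacement intensities (here simply zeros).
mask = (img > 0.5).astype(float)
fill = np.zeros_like(img)

healthy = compose_healthy(img, mask, fill)  # lesion replaced, rest untouched
```

In a trained model, both `mask` and `fill` would be produced by the network; the mask itself is then the learned segmentation, obtained without any manual segmentation labels.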
We plan to train and evaluate our approach on different types of pathology, using publicly available data. Manual segmentations will be used only for evaluation, with quantitative measures such as the F1 score and the Hausdorff distance. Preliminary results on tumors are encouraging.
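The two evaluation measures mentioned above can be computed directly on binary masks. The sketch below is a minimal NumPy version, assuming a predicted mask and a ground-truth mask on the same pixel grid; the brute-force Hausdorff distance over all foreground pixels is meant for illustration, not efficiency.

```python
import numpy as np

def f1_score(pred, gt):
    """F1 (Dice) overlap between predicted and ground-truth binary masks."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn)

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance (in pixels) between the foreground
    pixel sets of two binary masks, via pairwise distances."""
    a = np.argwhere(pred).astype(float)
    b = np.argwhere(gt).astype(float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: ground-truth 2x2 square vs. a prediction shifted right by one.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 2:4] = True

f1 = f1_score(pred, gt)    # 0.5: half the pixels overlap
hd = hausdorff(pred, gt)   # 1.0: masks are one pixel apart
```

In practice, libraries such as scikit-learn and SciPy provide tested implementations of these measures, but the definitions above suffice to make the evaluation protocol concrete.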