Computer aided detection for breast lesion in ultrasound and mammography
In the field of breast cancer imaging, traditional Computer Aided Detection (CAD) systems were designed with limited computing resources and relied on scanned films of poor image quality, resulting in less robust systems. Currently, with advancements in technology, it is possible to perform 3D imaging and to acquire high-quality Full-Field Digital Mammograms (FFDM).
Automated Breast Ultrasound (ABUS) has been proposed to produce a full 3D scan of the breast automatically, with reduced operator dependency. When using ABUS, lesion segmentation and tracking changes over time are challenging tasks, as the 3D nature of the images makes the analysis difficult and tedious for radiologists. One of the goals of this thesis is to develop a framework for breast lesion segmentation in ABUS volumes. The 3D lesion volume, in combination with texture and contour analysis, could provide valuable information to assist radiologists in the diagnosis.
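As a minimal illustration of how a segmented lesion yields a quantitative measurement, the sketch below computes a lesion's physical volume from a binary 3D segmentation mask by counting foreground voxels and scaling by the voxel dimensions. This is a generic illustration, not the thesis implementation; the function name and the nested-list mask representation are assumptions made for the example.

```python
def lesion_volume_mm3(mask, voxel_size=(1.0, 1.0, 1.0)):
    """Physical volume of a segmented lesion.

    mask       -- binary segmentation as nested lists indexed [z][y][x],
                  with 1 marking lesion voxels (e.g. the output of an
                  ABUS lesion segmentation step)
    voxel_size -- (dz, dy, dx) voxel spacing in millimetres
    """
    dz, dy, dx = voxel_size
    # Count foreground voxels across all slices, then scale by the
    # volume of a single voxel.
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    return n_voxels * dz * dy * dx


# Toy example: a 2x2x2 block of lesion voxels with 0.5 mm isotropic
# spacing occupies 8 * 0.125 = 1.0 mm^3.
toy_mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
print(lesion_volume_mm3(toy_mask, voxel_size=(0.5, 0.5, 0.5)))  # 1.0
```

In practice the mask would come from the segmentation framework and be stored as an array (e.g. NumPy), but the arithmetic is the same.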
Although ABUS volumes are of great interest, X-ray mammography is still the gold-standard imaging modality for breast cancer screening due to its fast acquisition and cost-effectiveness. Moreover, with the advent of deep learning methods based on Convolutional Neural Networks (CNN), modern CAD systems are able to learn automatically which imaging features are most relevant for diagnosis, boosting the usefulness of these systems. One limitation of CNNs is that they require large training datasets, which are scarce in the field of medical imaging.
In this thesis, the issue of limited dataset size is addressed using two strategies: (i) using image patches as inputs rather than full-sized images, and (ii) applying transfer learning, in which the knowledge obtained by training for one task is reused for another related task (also known as domain adaptation). In this regard, a CNN trained on a very large dataset of natural images is first adapted to classify mass versus non-mass image patches in Screen-Film Mammograms (SFM), and the newly trained CNN model is then adapted to detect masses in FFDM. The prospects of transfer learning directly between natural images and FFDM are also investigated. Two public datasets, CBIS-DDSM and INbreast, have been used for this purpose. In the final phase of the research, a fully automatic mass detection framework is proposed that takes the whole mammogram as input (instead of image patches) and outputs the localisation of the lesion within the mammogram. For this purpose, the OPTIMAM Mammography Image Database (OMI-DB) is used.
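The patch-based strategy (i) can be sketched as a sliding window over the mammogram: each fixed-size crop becomes one training or inference sample, so a single full-sized image yields many inputs for the CNN. The sketch below is a generic illustration under assumed parameters (patch size, stride, and the nested-list image representation are all choices made for the example, not details from the thesis).

```python
def extract_patches(image, patch_size=3, stride=2):
    """Slide a square window over a 2D image and collect every full patch.

    image -- list of equal-length rows of pixel intensities
             (standing in for a full-sized mammogram)
    Returns a list of (row, col, patch) tuples, where (row, col) is the
    top-left corner of the patch in the original image. Each patch can
    then be fed to a CNN as an independent mass/non-mass sample.
    """
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patch = [row[c:c + patch_size] for row in image[r:r + patch_size]]
            patches.append((r, c, patch))
    return patches


# Toy 5x5 "image": with patch_size=3 and stride=2 the window fits at
# rows {0, 2} and cols {0, 2}, giving 4 patches.
img = [[r * 5 + c for c in range(5)] for r in range(5)]
print(len(extract_patches(img)))  # 4
```

Because each patch's origin is recorded, a patch-level mass/non-mass prediction can be mapped back to a location in the full mammogram, which is essentially what the final whole-image detection framework must provide as output.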
The results obtained as part of this thesis show higher performance compared to state-of-the-art methods, indicating that the proposed methods and frameworks have the potential to be implemented within advanced CAD systems to support radiologists in breast cancer screening.