AI for Radiology

Posted on Wed 25 September 2024 in research

The application of Artificial Intelligence (AI) in radiology offers numerous benefits, including improved diagnostic accuracy for X-ray and CT scans, enhanced image analysis of MRI and PET scans, and increased efficiency in reviewing ultrasound and mammography images. AI-powered systems can automate routine tasks related to chest X-rays, abdominal CTs, and bone density scans, freeing radiologists to focus on complex cases such as brain MRIs, cardiac CT angiography, and breast cancer screening mammograms. For example, AI can detect active pulmonary tuberculosis (TB) by analyzing chest X-ray images for signs of disease, such as cavitation or fibrosis. By automating the detection of these changes, such systems can help healthcare professionals diagnose TB earlier, ultimately improving patient outcomes.

Figure (example application): radiological signs on healthy (left) and active pulmonary TB-affected lungs (right). AI models can be used to detect TB from CXR images.
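
As a minimal illustration of this kind of application, the sketch below (PyTorch) scores a single chest X-ray for TB with a pretrained convolutional network. It is only a sketch: the weight file, the input image and the single-logit head are hypothetical placeholders, not one of the models discussed below.

    # Minimal sketch: score one chest X-ray for TB with a pretrained CNN.
    # Weight file, input image and single-logit head are hypothetical placeholders.
    import torch
    import torchvision.transforms as T
    from torchvision.models import densenet121
    from PIL import Image

    preprocess = T.Compose([
        T.Grayscale(num_output_channels=3),   # CXRs are single-channel; replicate to RGB
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    model = densenet121(weights=None)
    model.classifier = torch.nn.Linear(model.classifier.in_features, 1)  # one logit: TB vs. not TB
    model.load_state_dict(torch.load("tb_cxr_model.pth", map_location="cpu"))  # hypothetical weights
    model.eval()

    image = Image.open("patient_cxr.png")  # hypothetical input image
    with torch.no_grad():
        logit = model(preprocess(image).unsqueeze(0))
        probability = torch.sigmoid(logit).item()
    print(f"estimated TB probability: {probability:.2f}")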

Partnerships

  • Prof. Dr. Med. Anete Trajman, Dr. José Manoel de Seixas, Dr. Natanael Moura Jr., from the Federal University of Rio de Janeiro
  • Prof. Dr. Med. Lucia Mazzolai, from CHUV (Lausanne university hospital), Switzerland
  • Prof. Manuel Günther, University of Zürich, Switzerland
  • Dr. Eugenio Canes and André Baceti, Murabei Data Science, Brazil

Radiomics: In (Jimenez-del-Toro et al., 2024) we developed an evaluation framework for extracting 3D deep radiomics features using pre-trained neural networks on real computed tomography (CT) scans, allowing comprehensive quantification of tissue characteristics. We compared these new features with standard hand-crafted radiomic features, demonstrating that our proposed 3D deep-learning radiomics are at least twice as stable across different CT parameter variations as any category of traditional features. Notably, even when trained on an external dataset and task, our generic deep radiomics showed excellent stability and discriminative power for tissue characterization between different classes of liver lesions and normal tissue, with an average accuracy of 93.5%.
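
The sketch below illustrates the general recipe, not the exact pipeline of the paper: deep features are pooled from a pretrained 2D backbone applied slice-wise to a 3D CT patch, and feature stability is scored as the coefficient of variation across repeated acquisitions of the same region under different CT parameters. The file names, HU windowing and backbone choice are illustrative assumptions.

    # Illustrative sketch: slice-wise deep features for a 3D CT patch + a simple
    # stability score across repeated acquisitions (lower variation = more stable).
    import numpy as np
    import torch
    from torchvision.models import resnet18, ResNet18_Weights

    backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()  # keep the 512-d penultimate features
    backbone.eval()

    def deep_features(ct_patch: np.ndarray) -> np.ndarray:
        """ct_patch: (slices, H, W) array in Hounsfield units."""
        x = torch.from_numpy(ct_patch).float()
        x = (x.clamp(-1000.0, 400.0) + 1000.0) / 1400.0   # crude HU windowing to [0, 1]
        x = x.unsqueeze(1).repeat(1, 3, 1, 1)             # each slice becomes a 3-channel image
        with torch.no_grad():
            per_slice = backbone(x)                       # (slices, 512)
        return per_slice.mean(dim=0).numpy()              # average-pool over slices

    # Repeated scans of the same phantom region under different CT parameters (hypothetical files).
    scans = [np.load(f"phantom_region_scan{i}.npy") for i in range(5)]
    features = np.stack([deep_features(s) for s in scans])             # (scans, 512)
    cv = features.std(axis=0) / (np.abs(features.mean(axis=0)) + 1e-8)
    print("median coefficient of variation across acquisitions:", np.median(cv))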

Chest X-ray CAD for Tuberculosis: In (Geric et al., 2023), we conducted a review of computer-aided detection (CAD) software for TB detection, highlighting its potential as a valuable tool but also several implementation challenges. Our assessment emphasizes the need for further research to address issues related to diagnostic heterogeneity, regulatory frameworks, and technology adaptation to meet the needs of high-burden settings and vulnerable populations. In (Raposo et al., 2022) we proposed a new approach to develop more generalizable CAD systems for pulmonary tuberculosis (PT) screening from chest X-ray images. Our method used radiological signs as intermediary proxies to detect PT, rather than relying on direct image-to-probability detection techniques that often fail to generalize across different datasets. We developed a multi-class deep learning model that maps images to 14 key radiological signs, followed by a second model that predicts PT diagnosis from these signs. Our approach demonstrated superior generalization compared to traditional CAD models, achieving higher area under the specificity vs. sensitivity curve (AUC) values in cross-dataset evaluation scenarios. Building on this, in (Güler et al., 2024) we developed reliable techniques to train deep learning models for PT detection. By pre-training a neural network on a proxy task and using the Mixed Objective Optimization Network (MOON) technique to balance classes during training, we demonstrate that our approach can improve alignment with human experts' decision-making processes while maintaining perfect classification accuracy. Notably, this method also enhances generalization on unseen datasets, making it more suitable for real-world applications. We made our source code publicly available online for reproducibility purposes.
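
A much-simplified sketch of the two-stage idea follows: a first network maps a chest X-ray to independent scores for 14 radiological signs, and a second, small model predicts PT from those scores. The backbone, sign head and random input below are placeholders rather than the published architecture, and class balancing (e.g. MOON-style per-class weighting) is omitted.

    # Simplified two-stage sketch: image -> 14 radiological-sign scores -> TB probability.
    import torch
    import torch.nn as nn
    from torchvision.models import densenet121

    NUM_SIGNS = 14  # e.g. cavitation, fibrosis, pleural effusion, ... (illustrative list)

    class SignDetector(nn.Module):
        """Stage 1: chest X-ray -> independent probabilities for 14 radiological signs."""
        def __init__(self):
            super().__init__()
            self.backbone = densenet121(weights=None)
            self.backbone.classifier = nn.Linear(self.backbone.classifier.in_features, NUM_SIGNS)

        def forward(self, x):
            return torch.sigmoid(self.backbone(x))

    class TBFromSigns(nn.Module):
        """Stage 2: sign probabilities -> TB probability (small, interpretable model)."""
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(NUM_SIGNS, 1)

        def forward(self, signs):
            return torch.sigmoid(self.linear(signs))

    stage1, stage2 = SignDetector().eval(), TBFromSigns().eval()
    cxr = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed chest X-ray
    with torch.no_grad():
        tb_probability = stage2(stage1(cxr))
    print(tb_probability.item())

Because the second stage only sees sign scores rather than raw pixels, it is less tied to dataset-specific image statistics, which is the intuition behind the improved cross-dataset generalization described above.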

Semantic Segmentation: In (Renzo et al., 2021) we investigated the impact of digitizing analog X-ray films and its effect on deep neural networks trained for lung segmentation, highlighting the need to adapt these models to accurately analyze digitized data. Our results show that while our model performs exceptionally well (AUPRC: 0.99) at identifying lung regions in digital X-rays, its performance drops significantly (AUPRC: 0.90) when applied to digitized images. We also found that traditional performance metrics, such as the maximum F1 score and the area under the precision-recall curve (AUPRC), may not be sufficient to characterize segmentation problems in test images, particularly due to the natural connectivity of lungs in X-ray images.
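
For reference, the evaluation metrics mentioned above can be computed pixel-wise as in the sketch below (scikit-learn); the ground-truth lung mask and model probabilities here are synthetic stand-ins.

    # Pixel-wise AUPRC and maximum F1 for a (synthetic) lung segmentation.
    import numpy as np
    from sklearn.metrics import average_precision_score, precision_recall_curve

    rng = np.random.default_rng(0)
    ground_truth = rng.integers(0, 2, size=(256, 256))                         # binary lung mask
    predicted = np.clip(ground_truth + rng.normal(0, 0.4, (256, 256)), 0, 1)   # noisy probabilities

    y_true, y_score = ground_truth.ravel(), predicted.ravel()
    auprc = average_precision_score(y_true, y_score)   # area under the precision-recall curve

    precision, recall, _ = precision_recall_curve(y_true, y_score)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    print(f"AUPRC: {auprc:.3f}, maximum F1: {f1.max():.3f}")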

Demographic Fairness: In (Queiroz Neto et al., 2024) we tackled the challenge of ensuring consistent performance and fairness in machine learning models for medical image diagnostics, with a focus on chest X-ray images. We proposed using foundation models as embedding extractors to create groups representing protected attributes, such as gender and age. This approach can effectively group individuals by gender in both in- and out-of-distribution scenarios, reducing bias by up to 6.2%. However, the model's robustness in handling age attributes is limited, highlighting the need for more fundamentally fair and robust foundation models. These findings contribute to the development of more equitable medical diagnostics, particularly where protected attribute information is lacking.
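
Conceptually, the grouping step can be sketched as follows: foundation-model embeddings of chest X-rays are clustered into proxy groups, and classifier performance is then compared across those groups. The embeddings, labels and scores below are synthetic placeholders, and k-means stands in for whichever grouping strategy is actually used.

    # Conceptual sketch: proxy demographic groups from embeddings + per-group performance gap.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 768))   # stand-in for foundation-model features
    diagnosis = rng.integers(0, 2, size=1000)   # ground-truth labels for some finding
    scores = np.clip(diagnosis * 0.6 + rng.normal(0.3, 0.2, 1000), 0, 1)  # classifier outputs

    # Proxy groups inferred from the embedding space (e.g. hoping to recover gender clusters).
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

    per_group_auc = [roc_auc_score(diagnosis[groups == g], scores[groups == g]) for g in (0, 1)]
    print("per-group AUC:", per_group_auc, "gap:", abs(per_group_auc[0] - per_group_auc[1]))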


Bibliography

C. Geric, Z. Z. Qin, C. M. Denkinger, S. V. Kik, B. Marais, André Anjos, P.-M. David, F. A. Khan, and A. Trajman. The rise of artificial intelligence reading of chest X-rays for enhanced TB diagnosis and elimination. Int J Tuberc Lung Dis, May 2023. doi:10.5588/ijtld.22.0687.

Özgür Güler, Manuel Günther, and André Anjos. Refining tuberculosis detection in CXR imaging: addressing bias in deep neural networks via interpretability. In Proceedings of the 12th European Workshop on Visual Information Processing. September 2024.

Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Michael Bach, Kyriakos Flouris, Ender Konukoglu, Bram Stieltjes, Markus M. Obmann, André Anjos, Henning Müller, and Adrien Depeursinge. Comparing stability and discriminatory power of hand-crafted versus deep radiomics: a 3D-printed anthropomorphic phantom study. In Proceedings of the 12th European Workshop on Visual Information Processing. September 2024.

Dilermando Queiroz Neto, André Anjos, and Lilian Berton. Using backbone foundation model for evaluating fairness in chest radiography without demographic data. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). October 2024.

Geoffrey Raposo, Anete Trajman, and André Anjos. Pulmonary tuberculosis screening from radiological signs on chest X-ray images using deep models. In Union World Conference on Lung Health. The Union, November 2022.

Matheus A. Renzo, Natália Fernandez, André A. Baceti, Natanael Nunes Moura Junior, and André Anjos. Development of a lung segmentation algorithm for analog imaged chest X-ray: preliminary results. In Anais do 15. Congresso Brasileiro de Inteligência Computacional, 1–8. SBIC, October 2021. URL: https://sbic.org.br/eventos/cbic_2021/cbic2021-123/, doi:10.21528/CBIC2021-123.