Recent Submissions

  • Explainable deep learning framework for COVID-19 detection in volumetric CT images aligned with the British Society of Thoracic Imaging reporting guidance: a pilot study

    Fouad, Shereen; Usman, Muhammad; Kabir, Ra'eesa; Rajasekaran, Arvind; Morlese, John; Nagori, Pankaj; Bhatia, Bahadar; et al. (Springer, 2025-02-26)
    In March 2020, the British Society of Thoracic Imaging (BSTI) introduced reporting guidance for COVID-19 detection to streamline standardised reporting and enhance agreement between radiologists. However, most current deep learning (DL) methods do not conform to this guidance. This study introduces a multi-class DL model to identify BSTI COVID-19 categories within CT volumes, classified as 'Classic', 'Probable', 'Indeterminate', or 'Non-COVID'. A total of 56 pseudo-anonymised CT images were collected from patients with suspected COVID-19 and annotated by an experienced chest subspecialty radiologist following the BSTI guidance. We evaluated the performance of multiple DL-based models, including three-dimensional (3D) ResNet architectures pre-trained on the Kinetics-700 video dataset. For better interpretability of the results, our approach incorporates a post-hoc visual explainability feature to highlight the areas of the image most indicative of the assigned COVID-19 category. Our four-class classification DL framework achieves an overall accuracy of 75%. However, the model struggled to detect the 'Indeterminate' COVID-19 group, whose removal significantly improved the model's accuracy to 90%. The proposed explainable multi-class DL model accurately detects the 'Classic', 'Probable', and 'Non-COVID' categories but performs poorly on 'Indeterminate' COVID-19 cases. These findings are consistent with clinical studies that manually validated the BSTI reporting guidance amongst consultant radiologists.
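    As an illustration of the kind of pipeline described above, the minimal sketch below adapts a Kinetics-pretrained 3D ResNet to a four-class BSTI classifier. It uses torchvision's r3d_18 with Kinetics-400 weights as a stand-in for the Kinetics-700-pretrained backbones evaluated in the study; the class order, input resolution, and pre-processing are assumptions, not the authors' code.

      # Minimal sketch, assuming torchvision >= 0.13; r3d_18 (Kinetics-400 weights)
      # stands in for the paper's Kinetics-700-pretrained 3D ResNets.
      import torch
      import torch.nn as nn
      from torchvision.models.video import r3d_18, R3D_18_Weights

      BSTI_CLASSES = ["Classic", "Probable", "Indeterminate", "Non-COVID"]  # assumed order

      model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
      model.fc = nn.Linear(model.fc.in_features, len(BSTI_CLASSES))  # replace the 400-way video head

      # A CT volume resampled to (channels, depth, height, width) = (3, 64, 112, 112);
      # repeating the slice data across 3 channels mimics the RGB video input the backbone expects.
      volume = torch.randn(1, 3, 64, 112, 112)

      model.eval()
      with torch.no_grad():
          probs = torch.softmax(model(volume), dim=1)
      print(dict(zip(BSTI_CLASSES, probs.squeeze(0).tolist())))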
  • Evaluating explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered lens

    E Ihongbe, Izegbua; Fouad, Shereen; F Mahmoud, Taha; Rajasekaran, Arvind; Bhatia, Bahadar; Radiology; Medical and Dental; Aston University; University Hospital of Sharjah; Sandwell and West Birmingham NHS Trust; University of Leicester (Public Library of Science, 2024-10-09)
    The field of radiology imaging has experienced a remarkable increase in the use of deep learning (DL) algorithms to support diagnostic and treatment decisions. This rise has led to the development of Explainable AI (XAI) systems to improve the transparency of, and trust in, complex DL methods. However, XAI systems face challenges in gaining acceptance within the healthcare sector, mainly due to technical hurdles in utilizing them in practice and the lack of human-centered evaluation/validation. In this study, we focus on visual XAI systems applied to DL-enabled diagnostic systems in chest radiography. In particular, we conduct a user study to evaluate two prominent visual XAI techniques from the human perspective. To this end, we created two clinical scenarios for diagnosing pneumonia and COVID-19 using DL techniques applied to chest X-ray and CT scans. The achieved accuracy rates were 90% for pneumonia and 98% for COVID-19. Subsequently, we employed two well-known XAI methods, Grad-CAM (Gradient-weighted Class Activation Mapping) and LIME (Local Interpretable Model-agnostic Explanations), to generate visual explanations elucidating the AI decision-making process. The visual explanations were then evaluated by medical professionals in a user study in terms of clinical relevance, coherency, and user trust. In general, participants expressed a positive perception of the use of XAI systems in chest radiography, but there was a noticeable lack of awareness regarding their value and practical aspects. Regarding preferences, Grad-CAM outperformed LIME in terms of coherency and trust, although concerns were raised about its clinical usability. Our findings highlight key user-driven explainability requirements, emphasizing the importance of multi-modal explainability and the need to increase awareness of XAI systems among medical practitioners. Inclusive design was also identified as crucial to better align these systems with user needs.
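    To make the Grad-CAM step concrete, the sketch below computes a class-activation heatmap for a 2D chest image classifier using forward and backward hooks; the ResNet-18 backbone, two-class head, and input pre-processing are placeholders, not the models or code used in the study.

      # Minimal Grad-CAM sketch (manual hooks); not the authors' implementation.
      import torch
      import torch.nn.functional as F
      from torchvision.models import resnet18

      model = resnet18(num_classes=2)   # e.g. pneumonia vs. normal (assumed head)
      model.eval()

      acts, grads = {}, {}
      model.layer4.register_forward_hook(lambda m, i, o: acts.update(value=o))
      model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(value=go[0]))

      x = torch.randn(1, 3, 224, 224)   # placeholder pre-processed chest X-ray
      logits = model(x)
      target = int(logits.argmax())
      logits[0, target].backward()      # gradients of the predicted class score

      # Channel weights = global-average-pooled gradients; weighted sum -> ReLU -> normalise.
      weights = grads["value"].mean(dim=(2, 3), keepdim=True)
      cam = F.relu((weights * acts["value"]).sum(dim=1, keepdim=True))
      cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
      cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1] to overlay on the image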
  • Apparent diffusion coefficient (ADC): A potential in vivo biological surrogate of the incidentally discovered bone lesions at 3T MRI.

    Nouh, M R; Doweidar, Ahmed; Khalil, Abdullah Mohie-Eddin; Radiology; Medical and Dental; Alexandria University; El-Razi Hospital; Sandwell and West Birmingham NHS Trust (Elsevier, 2021-11-25)
    The mean ADC value (mean ± SD) of all malignant tumors (including cartilaginous neoplasms) was [0.92 ± 0.40] × 10⁻³ mm²/s. This differed significantly from that of primary benign tumors, [1.14 ± 0.24] × 10⁻³ mm²/s (p = 0.011), and from that of all non-malignant lesions collectively, [1.29 ± 0.44] × 10⁻³ mm²/s (p < 0.001). Using a mean ADC cut-off of ≤ 1.1 × 10⁻³ mm²/s yielded 86.1% sensitivity and 62.5% specificity for characterizing a lesion as malignant. Inter-rater reliability was almost perfect (95% CI = 0.954-0.985).
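    For context, the sketch below shows how such a cut-off translates into sensitivity and specificity; the ADC values and labels are synthetic illustrations, not the study's data.

      # Minimal sketch: sensitivity/specificity of an ADC cut-off (synthetic data).
      import numpy as np

      CUTOFF = 1.1e-3  # mm^2/s; a lesion with mean ADC <= cut-off is called malignant

      adc   = np.array([0.7, 0.9, 1.0, 0.8, 1.05, 1.2, 1.3, 1.4]) * 1e-3  # hypothetical mean ADC values
      truth = np.array([1,   1,   1,   1,   0,    0,   0,   0])           # 1 = malignant (assumed labels)

      called_malignant = adc <= CUTOFF
      tp = np.sum(called_malignant & (truth == 1))
      fn = np.sum(~called_malignant & (truth == 1))
      tn = np.sum(~called_malignant & (truth == 0))
      fp = np.sum(called_malignant & (truth == 0))

      sensitivity = tp / (tp + fn)  # malignant lesions correctly falling below the cut-off
      specificity = tn / (tn + fp)  # non-malignant lesions correctly falling above it
      print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")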
  • Audit of adequacy of the large joints magnetic resonance imaging

    Doweidar, Ahmed; Murphy, Aoife; Elsakaan, Mohamed; Hashmi, Muhammad; Radiography; Allied Health Professional; Medical and Dental; et al. (Elsevier, 2022-11)
    No abstract available