Background: Prostate cancer (PCa) is a leading healthcare concern and is divided into clinically significant (csPCa) and indolent PCa according to prognosis and treatment options. Although multi-parametric magnetic resonance imaging (mpMRI) has enabled significant advances, it cannot differentiate between these two categories; invasive procedures such as transrectal prostate biopsy therefore remain necessary for the initial diagnosis. In response to these challenges, artificial intelligence (AI)-based algorithms combined with radiomics features offer the possibility of creating a textural, pixel-pattern-based surrogate that could correlate the medical imaging with the pathological report in a one-to-one manner.

Objective: The aim of the present study was to develop a machine learning model that differentiates indolent from csPCa lesions and classifies each nodule into its corresponding ISUP grade prior to prostate biopsy, using textural features derived from mpMRI T2-weighted (T2WI) acquisitions.

Materials and Methods: The study included 154 patients with 201 individual prostatic lesions. All cases were scanned on the same 1.5 Tesla mpMRI machine using a standard protocol. Each nodule was manually delineated on the 3D Slicer platform (version 5.2.2), and textural parameters were extracted with the PyRadiomics library (version 3.1.0). We compared three machine learning classification models (Random Forest, Support Vector Machine, and Logistic Regression) under full-, partial-, and no-correlation feature settings, in order to differentiate indolent from csPCa lesions, as well as ISUP 2 from ISUP 3 lesions.

Results: The median age was 65 years (IQR: 61–69), the mean PSA value was 10.27 ng/mL, and 76.61% of the segmented lesions had a PI-RADS score of 4 or higher.
Overall, the highest performance was achieved by the Random Forest model in the partial correlation setting, which differentiated indolent from csPCa lesions and ISUP 2 from ISUP 3 lesions with accuracies of 88.13% and 82.5%, respectively. When the models were trained on combined clinical data and radiomic signatures, these accuracies increased to 91.11% and 91.39%, respectively.

Conclusions: We developed a machine learning decision support tool that accurately predicts the ISUP grade prior to prostate biopsy, based on the textural features extracted from T2WI acquisitions.
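The model-comparison step described in the Methods can be sketched as follows. This is a hypothetical illustration only, not the study's actual pipeline: the feature matrix and labels below are synthetic stand-ins for the PyRadiomics textural features and biopsy-derived labels, and the hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: comparing Random Forest, SVM, and Logistic Regression
# on radiomic-style feature vectors (synthetic data, NOT the study cohort).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 201 lesions x 20 textural features (synthetic stand-in for PyRadiomics output)
X = rng.normal(size=(201, 20))
y = rng.integers(0, 2, size=201)  # 0 = indolent, 1 = csPCa (synthetic labels)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Support Vector Machine": SVC(kernel="rbf"),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# Cross-validated accuracy for each candidate classifier
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```

In practice, the feature matrix would come from PyRadiomics run on the T2WI volumes and 3D Slicer segmentations, with a correlation-based filter applied beforehand to produce the full-, partial-, and no-correlation settings.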