We introduce an innovative, simple, and effective segmentation-free approach for survival analysis of head and neck cancer (HNC) patients from PET/CT images. By harnessing deep learning-based feature extraction techniques and multi-angle maximum intensity projections (MA-MIPs) applied to Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) images, our proposed method eliminates the need for manual segmentation of regions of interest (ROIs) such as primary tumors and involved lymph nodes. Instead, a state-of-the-art object detection model is trained on the CT images to automatically crop the head and neck anatomical region of the PET volumes, rather than only the lesions or involved lymph nodes. A pre-trained deep convolutional neural network backbone is then used to extract deep features from MA-MIPs obtained from 72 multi-angle axial rotations of the cropped PET volumes. These deep features, extracted from multiple projection views of the PET volumes, are aggregated, fused, and used to perform recurrence-free survival analysis on a cohort of 489 HNC patients. The proposed approach outperforms the best-performing method on the target dataset for the task of recurrence-free survival analysis. By circumventing the manual delineation of malignancies on FDG PET/CT images, our approach eliminates the dependency on subjective interpretation and greatly enhances the reproducibility of the proposed survival analysis method. The code for this work is publicly released.
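The MA-MIP step described above can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the function names (`rotate_axial`, `ma_mips`) and the nearest-neighbour resampling are assumptions. It rotates a cropped PET volume about its axial (z) axis in 360/72 = 5-degree steps and takes a maximum intensity projection of the in-plane data at each angle.

```python
import numpy as np

def rotate_axial(volume, angle_deg):
    """Rotate a (z, y, x) volume about its z axis.

    Nearest-neighbour resampling about the in-plane center; voxels that
    map outside the field of view are zero-filled. Illustrative only.
    """
    z, h, w = volume.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Inverse rotation: for each output pixel, find its source coordinate.
    ys = cy + (yy - cy) * np.cos(theta) + (xx - cx) * np.sin(theta)
    xs = cx - (yy - cy) * np.sin(theta) + (xx - cx) * np.cos(theta)
    yi = np.rint(ys).astype(int)
    xi = np.rint(xs).astype(int)
    valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out = np.zeros_like(volume)
    out[:, yy[valid], xx[valid]] = volume[:, yi[valid], xi[valid]]
    return out

def ma_mips(volume, n_angles=72):
    """Multi-angle maximum intensity projections of a (z, y, x) volume.

    Returns an (n_angles, z, y) stack: one coronal-style MIP (maximum
    along x) per axial rotation angle.
    """
    step = 360.0 / n_angles
    return np.stack([rotate_axial(volume, k * step).max(axis=2)
                     for k in range(n_angles)])
```

Each of the resulting 2D projections would then be passed through the pre-trained CNN backbone, and the per-angle features aggregated before the survival model.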