Purpose: To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values. Materials and Methods: A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder–decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model. Results: The MAE between the synthetic CT images generated by our model and the planning CT was 10.93 HU, compared to 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model showed superior performance in both qualitative and quantitative evaluations. Conclusions: Our model can synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved tissue edges well, which is important for downstream tasks in adaptive radiotherapy.
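The three evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' evaluation code; the function names and the assumed HU dynamic range (`data_range=4000.0`, roughly covering air to dense bone) are our own choices, and the SSIM shown is a simplified single-window (global) variant of the standard locally windowed metric.

```python
import numpy as np

def mae_hu(sct, pct):
    # Mean absolute error in Hounsfield units between synthetic and planning CT.
    return float(np.mean(np.abs(sct.astype(np.float64) - pct.astype(np.float64))))

def psnr(sct, pct, data_range=4000.0):
    # Peak signal-to-noise ratio in dB; data_range is an assumed HU window.
    mse = np.mean((sct.astype(np.float64) - pct.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range=4000.0):
    # Simplified global SSIM computed over the whole image; the standard
    # metric averages this formula over small local windows.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

In practice one would compute these per paired slice (or per volume) and report the mean over the test set, as the abstract's 10.93 HU / 33.91 dB / 0.90 figures suggest.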