Background: Intraoperative ultrasound (ioUS) provides real-time imaging during neurosurgical procedures, with advantages such as portability and cost-effectiveness. Accurate tumor segmentation could substantially enhance the interpretability of ioUS images; however, its implementation is limited by persistent challenges, including noise, artifacts, and anatomical variability. This study aimed to develop a convolutional neural network (CNN) model for glioma segmentation in ioUS images using a multicenter dataset.

Methods: We retrospectively collected data from the BraTioUS and ReMIND datasets, including histologically confirmed gliomas with high-quality B-mode images. For each patient, the tumor was manually segmented on the 2D slice showing its largest diameter. A CNN was trained using the nnU-Net framework. The dataset was stratified by center and divided into training (70%) and testing (30%) subsets, with external validation performed on two independent cohorts: the RESECT-SEG database and the Imperial College NHS Trust London cohort. Performance was evaluated using the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile Hausdorff distance (HD95).

Results: The dataset comprised 197 subjects, 56 of whom formed the hold-out testing set; the external validation cohorts included 53 subjects. In the hold-out testing set, the model achieved a median DSC of 0.90, ASSD of 8.51, and HD95 of 29.08. On external validation, the model achieved a DSC of 0.65, ASSD of 14.14, and HD95 of 44.02 on the RESECT-SEG database, and a DSC of 0.93, ASSD of 8.58, and HD95 of 28.81 on the Imperial-NHS cohort.

Conclusions: This study supports the feasibility of CNN-based glioma segmentation in ioUS across multiple centers. Future work should enhance segmentation detail and explore real-time clinical implementation, potentially expanding the role of ioUS in neurosurgical resection.
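The overlap metric reported above, the Dice similarity coefficient, is defined as DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. The following is a minimal illustrative sketch of how it can be computed for 2D binary masks with NumPy; it is not the evaluation code used in the study, and the toy masks are invented for demonstration.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping rectangular masks (hypothetical data)
a = np.zeros((4, 4), dtype=bool)
a[1:3, 1:3] = True   # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool)
b[1:3, 1:4] = True   # 6 foreground pixels, 4 shared with a

print(dice_similarity(a, b))  # 2*4 / (4+6) = 0.8
```

Boundary-based metrics such as ASSD and HD95 are typically computed from distance transforms of the mask surfaces; libraries such as MONAI provide ready-made implementations of all three metrics.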