Background/Objectives: Surgical pathology of tubo-ovarian and peritoneal cancer carries a well-recognised diagnostic workload, partly due to the large amount of non-primary tumour-related tissue requiring assessment for the presence of metastatic disease. The lymph nodes and omentum are almost universally included in such resection cases and contribute considerably to this burden, principally due to specimen volume rather than task complexity. To date, artificial intelligence (AI)-based studies have reported good success rates in identifying nodal spread in other malignancies, but the development of such time-saving assistive digital solutions has been neglected in ovarian cancer. This study aimed to develop a model that detects the presence or absence of metastatic ovarian carcinoma in the lymph nodes and omentum. Methods: We used attention-based multiple-instance learning (ABMIL) with a vision-transformer foundation model to classify whole-slide images (WSIs) as either containing ovarian carcinoma metastases or not. Training and validation were conducted with a total of 855 WSIs of surgical resection specimens collected from 404 patients at Leeds Teaching Hospitals NHS Trust. Results: Ensembled classification from hold-out testing reached an AUROC of 0.998 (0.985–1.0) and a balanced accuracy of 100% (100.0–100.0%) in the lymph node set, and an AUROC of 0.963 (0.911–0.999) and a balanced accuracy of 98.0% (94.8–100.0%) in the omentum set. Conclusions: This model shows great potential in the identification of ovarian carcinoma nodal and omental metastases, and could provide clinical utility through its ability to pre-screen WSIs prior to histopathologist review. In turn, this could offer significant time-saving benefits and streamline clinical diagnostic workflows, helping to address the chronic staffing shortages in histopathology.
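The attention-based pooling at the heart of the ABMIL approach described above can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, random weights, and classifier head are placeholder assumptions, and the toy array stands in for patch embeddings that a vision-transformer foundation model would extract from WSI tiles. It shows only the core mechanism: score each patch with a small attention network, softmax the scores into weights, pool the patches into a slide-level embedding, and classify that embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for patch embeddings from a foundation model:
# N patches per slide, each a D-dimensional feature vector.
N, D, H = 12, 16, 8
patches = rng.standard_normal((N, D))

# Hypothetical, randomly initialised parameters (in practice, learned).
V = rng.standard_normal((H, D))   # attention projection (tanh branch)
w = rng.standard_normal(H)        # attention scoring vector
W_cls = rng.standard_normal(D)    # linear head on the slide embedding

# ABMIL pooling: score each patch, normalise scores into attention
# weights, then form the slide embedding as the weighted sum of patches.
scores = w @ np.tanh(V @ patches.T)   # shape (N,), one score per patch
attn = softmax(scores)                # weights sum to 1 over patches
slide_embedding = attn @ patches      # shape (D,), slide-level vector

# Slide-level probability of "contains metastasis" (placeholder head).
p_metastasis = sigmoid(W_cls @ slide_embedding)
print(attn.shape, slide_embedding.shape, float(p_metastasis))
```

The attention weights also make the model inspectable: high-weight patches indicate the tissue regions driving the slide-level prediction, which supports the pre-screening use case described in the conclusions.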