
Article:


Exploring Artificial Intelligence Biases in Predictive Models for Cancer Diagnosis

Published: 26 January 2025

DOI: 10.3390/cancers17030407

Type: Article

Open Access: Yes

 

Abstract:

The American Society of Clinical Oncology (ASCO) has released principles for the responsible use of artificial intelligence (AI) in oncology, emphasizing fairness, accountability, oversight, equity, and transparency. However, the extent to which these principles are followed is unknown. The goal of this study was to assess the presence of biases and the quality of studies on AI models according to the ASCO principles, and to examine their potential impact through citation analysis and subsequent research applications. A review was conducted of original research articles centered on the evaluation of predictive models for cancer diagnosis published in the ASCO journal dedicated to informatics and data science in clinical oncology. Seventeen potential bias criteria, aligned with ASCO's principles for responsible AI use in oncology, were used to evaluate the sources of bias in the studies. The CREMLS checklist was applied to assess study quality, focusing on reporting standards, and the performance metrics and citation counts of the included studies were analyzed. Nine studies were included. The most common biases were environmental and life-course bias, contextual bias, provider expertise bias, and implicit bias. Among the ASCO principles, the least adhered to were transparency, oversight and privacy, and human-centered AI application. Only 22% of the studies provided access to their data. The CREMLS checklist revealed deficiencies in methodology and evaluation reporting. Most studies reported performance metrics within moderate to high ranges. Additionally, two studies were replicated in subsequent research. In conclusion, most studies exhibited various types of bias, reporting deficiencies, and failure to adhere to the principles for responsible AI use in oncology, limiting their applicability and reproducibility. Greater transparency, data accessibility, and compliance with international guidelines are recommended to improve the reliability of AI-based research in oncology.

 


 

Original article:

Exploring Artificial Intelligence Biases in Predictive Models for Cancer Diagnosis
