Home for Tumor (Cancer) Patients

Article:

Adversarial Attacks on Medical Image Classification

Originally published: 23 August 2023

DOI: 10.3390/cancers15174228

Type: Article

Open access: Yes

 

Abstract:

Due to the growing number of medical images being produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnoses could greatly assist clinical applications. However, even a one-pixel inaccuracy introduced at an imaging facility can lead to the inaccurate prediction of a medical image, and misclassification may lead to the wrong clinical decision. This scenario is analogous to adversarial attacks on deep learning models. Therefore, one-pixel and multi-pixel attacks on a Deep Neural Network (DNN) model trained on various medical image datasets are investigated in this study. Common multiclass and multi-label datasets are examined for one-pixel attacks. Moreover, different experiments are conducted to determine how changing the number of perturbed pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely survived the pixel attacks, raising the issue of the accuracy of medical image classification and the importance of a model's ability to resist these attacks in computer-aided diagnosis.
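To make the idea of a one-pixel attack concrete, here is a minimal, hypothetical sketch: a toy threshold classifier stands in for the trained DNN, and an exhaustive search over single-pixel changes stands in for the black-box optimizer (the original one-pixel attack by Su et al. uses differential evolution rather than brute force). All names and the classifier itself are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical sketch of a one-pixel attack on a toy classifier.
# The paper attacks trained DNNs; this stand-in only illustrates the
# principle that a single pixel can flip a prediction.

def classify(image, threshold=0.5):
    """Toy stand-in for a DNN: label 1 ("abnormal") if the mean pixel
    intensity exceeds the threshold, else 0 ("normal")."""
    flat = [v for row in image for v in row]
    return 1 if sum(flat) / len(flat) > threshold else 0

def one_pixel_attack(image, target_label, values=(0.0, 1.0)):
    """Brute-force search for one pixel change that flips the output
    to target_label; returns the perturbed image or None. Real
    black-box attacks search with differential evolution instead."""
    for i, row in enumerate(image):
        for j, _ in enumerate(row):
            for v in values:
                perturbed = [r[:] for r in image]  # copy the image
                perturbed[i][j] = v
                if classify(perturbed) == target_label:
                    return perturbed
    return None

# A 2x2 "image" sitting just under the decision boundary:
img = [[0.45, 0.45], [0.45, 0.45]]
assert classify(img) == 0
adv = one_pixel_attack(img, target_label=1)
assert adv is not None and classify(adv) == 1  # one pixel flipped the label
```

Inputs near a decision boundary, as in the sketch, are exactly the ones that fail first; the study's finding is that medical images fall to such perturbations far more broadly than this toy case suggests.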

 

Link to original article:

Adversarial Attacks on Medical Image Classification
