


Beyond the headlines: How to make the best of machine learning models in the wild
Course URL: http://videolectures.net/NGSchool2019_al_moubayed_beyond_the_head...
Lecturer: Noura Al Moubayed
Institution: Durham University
Date: 2019-12-03
Lecture language: English
Course description: Machine learning has achieved unprecedented results in a variety of application areas. Medical science has always been an area of high importance for AI applications due to its high potential social impact. Machine learning models are now able to reliably diagnose cancer from medical imaging and to assist physicians in providing better care to their patients more efficiently. The question is: how much can we trust these models? Recently, deep neural networks have been shown to be vulnerable to adversarial attacks, where a deliberately crafted fake input can lead to misclassification. The US Food and Drug Administration is currently reviewing its policy on accepting machine learning models in medical devices and diagnostics due to a recent case of a failed cancer diagnostic model. Hence, machine learning models are not only expected to perform accurately; they must also adhere to strict criteria on model performance, bias, and ongoing maintenance. Most importantly, in critical domains like medicine, the model has to be able to explain its decision-making process. I will present recent advances in building machine learning models that are robust to adversarial attacks and can explain their outputs.
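The adversarial attacks mentioned in the description exploit the gradient of a model's loss with respect to its input. A minimal sketch of this idea is the Fast Gradient Sign Method (FGSM): nudge each input feature by a small step in the direction that increases the loss. The toy logistic-regression model, the function names, and the numbers below are illustrative assumptions, not material from the lecture itself.

```python
import numpy as np

def sigmoid(z):
    # Logistic function, maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1 under a logistic model.
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic (cross-entropy) loss, d(loss)/dx = (p - y) * w, so the
    attack only needs the sign of that gradient (hypothetical helper).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A toy model that classifies x confidently and correctly...
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # true label y = 1
y = 1.0

# ...until a small, targeted perturbation flips the decision.
x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x))      # well above 0.5: correct
print(predict(w, b, x_adv))  # below 0.5: misclassified
```

The perturbation is small per feature, yet it is aligned exactly with the model's weakest direction, which is why such "designed fake inputs" can defeat models that perform well on natural data.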
Keywords: cancer; medical diagnostics; robustness
Source: VideoLectures.NET
Data collected: 2020-12-28: yxd
Last reviewed: 2020-12-28: yxd
Views: 55