
Beyond the headlines: How to make the best of machine learning models in the wild
Course URL: http://videolectures.net/NGSchool2019_al_moubayed_beyond_the_head...
Lecturer: Noura Al Moubayed
Institution: Durham University
Date: 2019-12-03
Language: English
Course description: Machine learning has achieved unprecedented results in a variety of application areas. Medical science has always been an area of high importance for AI applications due to its high potential social impact. Machine learning models are now able to reliably diagnose cancer from medical imaging and to assist physicians in providing better care to their patients more efficiently. The question is: how much can we trust these models? Recently, deep neural networks have been shown to be vulnerable to adversarial attacks, in which a deliberately crafted fake input leads to misclassification. The US Food and Drug Administration is currently reviewing its policy on accepting machine learning models in medical devices and diagnostics due to a recent case of a failed cancer diagnostic model. Hence, machine learning models are not only expected to perform accurately; they must also adhere to strict criteria on model performance, bias, and ongoing maintenance. Most importantly, in critical domains like medicine, the model has to be able to explain its decision-making process. I will present recent advances in building machine learning models that are robust to adversarial attacks and can explain their outputs.
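To make the notion of an adversarial attack concrete, here is a minimal sketch (not from the lecture itself) of the classic Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression "classifier". The model weights, input, and epsilon below are all hypothetical values chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    """Standard logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to increase the loss of a logistic model (w, b).

    For binary cross-entropy, the gradient of the loss w.r.t. x is
    (p - y) * w, where p is the predicted probability. FGSM steps in
    the sign of that gradient, scaled by a small budget eps.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained model and a correctly classified input
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_clean = sigmoid(w @ x + b)   # confident in the true class
p_adv = sigmoid(w @ x_adv + b) # confidence drops below 0.5: misclassified
print(p_clean, p_adv)
```

Even this linear toy model is fooled by a bounded per-feature perturbation; for deep networks the same gradient-sign idea produces inputs that look unchanged to a human yet flip the predicted class, which is the vulnerability the lecture addresses.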
Keywords: machine learning; deep neural networks; medicine
Source: VideoLectures.NET
Data collected: 2020-08-04: yumf
Last reviewed: 2020-08-04: yumf
Views: 58