Explainable deep learning models for healthcare - CDSS 3

University of Glasgow

Instructor: Fani Deligianni

1,798 already enrolled

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.

4.6

(15 reviews)

Intermediate level

Recommended experience

3 weeks to complete
at 10 hours a week

Flexible schedule
Learn at your own pace

What you'll learn

  • Program global explainability methods in time-series classification

  • Program local explainability methods for deep learning such as CAM and GRAD-CAM

  • Understand axiomatic attributions for deep learning networks

  • Incorporate attention in Recurrent Neural Networks and visualise the attention weights

Details to know

Shareable certificate

Add to your LinkedIn profile

Assessments

5 assignments

Taught in English


Build your subject-matter expertise

This course is part of the Informed Clinical Decision Making using Deep Learning Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

There are 4 modules in this course

Deep learning models are complex, and it is difficult to understand their decisions. Explainability methods aim to shed light on deep learning decisions, enhance trust, avoid mistakes and ensure the ethical use of AI. Explanations can be categorised as global, local, model-agnostic or model-specific. Permutation feature importance is a global, model-agnostic explainability method that provides information about which input variables are most strongly related to the output.

What's included

6 videos, 8 readings, 1 assignment, 1 discussion prompt, 5 ungraded labs
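
As a concrete illustration of the idea, here is a minimal sketch of permutation feature importance, assuming a fitted classifier with a scikit-learn-style `score(X, y)` method and a tabular validation set; the function and variable names are illustrative, not taken from the course materials.

```python
import numpy as np

def permutation_importance(model, X_val, y_val, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X_val, y_val)          # accuracy on intact data
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            rng.shuffle(X_perm[:, j])             # break the feature-target link
            drops.append(baseline - model.score(X_perm, y_val))
        importances[j] = np.mean(drops)           # larger drop = more important
    return importances
```

Shuffling a column destroys its relationship with the target while leaving its marginal distribution intact, so the resulting drop in accuracy is a global, model-agnostic measure of how much the model relies on that feature.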

Local explainability methods provide explanations of how the model reaches a specific decision. LIME approximates the model locally with a simpler, interpretable model. SHAP expands on this and is also designed to address multi-collinearity of the input features. Both LIME and SHAP are local, model-agnostic explanations. On the other hand, CAM is a class-discriminative visualisation technique, specifically designed to provide local explanations in deep neural networks.

What's included

5 videos, 7 readings, 1 assignment, 1 discussion prompt, 7 ungraded labs
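
To make the CAM idea concrete, here is a minimal PyTorch sketch, assuming a CNN that ends in global average pooling followed by a single linear layer (the structure CAM requires); `model.features` and `model.classifier` are hypothetical attribute names for the convolutional backbone and the final linear layer, and the 2-D case shown transfers directly to 1-D time series.

```python
import torch
import torch.nn.functional as F

def class_activation_map(model, x, target_class):
    """CAM for a CNN ending in global average pooling + one linear layer."""
    model.eval()
    with torch.no_grad():
        fmaps = model.features(x)                        # (1, K, H, W) last conv maps
        weights = model.classifier.weight[target_class]  # (K,) class weights
        cam = torch.einsum('k,khw->hw', weights, fmaps[0])         # weighted sum of maps
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
    # Upsample back to the input resolution so the map can be overlaid on the input
    return F.interpolate(cam[None, None], size=x.shape[-2:],
                         mode='bilinear', align_corners=False)[0, 0]
```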

GRAD-CAM is an extension of CAM that generalises the approach to a broader range of deep neural network architectures. Although it is one of the most popular methods for explaining deep neural network decisions, it violates key axiomatic properties, such as sensitivity and completeness. Integrated gradients is an axiomatic attribution method that aims to close this gap.

What's included

4 videos, 6 readings, 1 assignment, 1 discussion prompt, 7 ungraded labs
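
Below is a minimal PyTorch sketch of integrated gradients, assuming `x` is a single unbatched example, `model` maps a batch of inputs to class scores, and a zero baseline is appropriate; the straight-line path integral is approximated with a Riemann sum.

```python
import torch

def integrated_gradients(model, x, target_class, steps=50):
    """Riemann-sum approximation of integrated gradients from a zero baseline."""
    model.eval()
    baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps, device=x.device)
    alphas = alphas.view(-1, *([1] * x.dim()))
    # Interpolate along the straight line from the baseline to the input
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    output = model(path)[:, target_class].sum()   # class score at every path point
    grads, = torch.autograd.grad(output, path)    # dF/dx along the path
    avg_grads = grads.mean(dim=0)                 # Riemann approximation of the integral
    return (x - baseline) * avg_grads
```

By the completeness axiom, these attributions sum (up to the error of the Riemann approximation) to the difference between the model's score at the input and at the baseline.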

Attention in deep neural networks mimics human attention, which allocates computational resources to a small range of sensory input in order to process specific information with limited processing power. This week, we discuss how to incorporate attention in Recurrent Neural Networks and autoencoders. Furthermore, we visualise the attention weights in order to provide a form of inherent explanation for the decision-making process.

What's included

3 videos, 3 readings, 2 assignments, 1 discussion prompt, 4 ungraded labs
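
As a sketch of how attention can be incorporated into a recurrent classifier, the following PyTorch module scores each GRU hidden state, normalises the scores with a softmax over time, and classifies from the attention-weighted sum; the returned weights can be plotted directly as an inherent explanation. Layer sizes and names are illustrative assumptions, not the course's reference implementation.

```python
import torch
import torch.nn as nn

class AttentionRNN(nn.Module):
    """GRU classifier with additive attention over the hidden states."""
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)         # one relevance score per time step
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, time, n_features)
        states, _ = self.rnn(x)                   # (batch, time, hidden)
        weights = torch.softmax(self.score(states).squeeze(-1), dim=1)  # (batch, time)
        context = (weights.unsqueeze(-1) * states).sum(dim=1)  # attention-weighted sum
        return self.out(context), weights         # weights double as an explanation

# Usage: logits, w = AttentionRNN(n_features=12)(torch.randn(8, 100, 12))
# Plotting `w` (e.g. with matplotlib's imshow) shows which time steps drove
# each prediction.
```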

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructor

Fani Deligianni
University of Glasgow
5 Courses, 5,884 learners

Offered by
University of Glasgow

Explore more from Machine Learning


Frequently asked questions