As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate, but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Interpretable Machine Learning, empowering you to develop AI solutions that are aligned with responsible AI principles. You will also gain an understanding of the emerging field of Mechanistic Interpretability and its use in understanding large language models.


What you'll learn
- Describe and implement regression and generalized interpretable models
- Demonstrate knowledge of decision trees, rules, and interpretable neural networks
- Explain foundational Mechanistic Interpretability concepts, hypotheses, and experiments
Skills you'll gain
- Regression Analysis
- Responsible AI
- Data-Driven Decision-Making
- Artificial Neural Networks
- Large Language Modeling
- Python Programming
- Decision Tree Learning
- Deep Learning
- Data Ethics
- Artificial Intelligence and Machine Learning (AI/ML)
- Predictive Modeling
- Machine Learning
- Artificial Intelligence
- Statistical Modeling
Details to know

- Shareable certificate to add to your LinkedIn profile
- 3 assignments

Build expertise in a specific domain
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
This module introduces regression and generalized models for interpretability. You will learn how to describe interpretable machine learning and differentiate between interpretability and explainability, explain and implement regression models in Python, and demonstrate knowledge of generalized models in Python. You will apply what you learn through discussions, guided programming labs, and a quiz.
What's included
5 videos, 7 readings, 1 assignment, 2 discussion prompts, 3 ungraded labs
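To make the module's idea concrete, here is a minimal sketch, not taken from the course materials and assuming scikit-learn and NumPy are installed, of the interpretable-regression workflow it describes: fit a linear model on synthetic data and read its coefficients as per-feature effects.

```python
# Minimal sketch (not from the course): fit a linear regression and read
# its coefficients as per-feature effects. Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                        # three synthetic features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)  # known true weights

model = LinearRegression().fit(X, y)

# Each coefficient estimates the change in y for a one-unit change in that
# feature with the others held fixed -- the core of the model's interpretability.
for name, coef in zip(["x0", "x1", "x2"], model.coef_):
    print(f"{name}: {coef:+.3f}")
print(f"intercept: {model.intercept_:+.3f}")
```

Standardizing the features before fitting makes the coefficients directly comparable to one another, which is usually the first step when reading a regression model this way.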
This module introduces decision trees, decision rules, and interpretability in neural networks. You will learn how to explain and implement decision trees and decision rules in Python, and how to define and explain interpretable neural network approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks. You will apply what you learn through discussions, guided programming labs, and a quiz.
What's included
8 videos, 1 reading, 1 assignment, 2 discussion prompts, 3 ungraded labs
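As a hedged illustration of this module's topic, not course material and assuming scikit-learn is installed, the sketch below trains a shallow decision tree and prints it as human-readable if/then rules, which is the sense in which a small tree doubles as a rule list.

```python
# Minimal sketch (not from the course): train a shallow decision tree and
# print it as readable if/then rules. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # depth limit keeps the tree readable
tree.fit(iris.data, iris.target)

# export_text renders each root-to-leaf path as a nested rule.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

The max_depth limit is what keeps the printed rule list short enough to read; deeper trees remain technically transparent but quickly stop being interpretable in practice.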
This module introduces Mechanistic Interpretability. You will learn how to explain foundational Mechanistic Interpretability concepts, including features and circuits; describe the Superposition Hypothesis; and define Representation Learning so that you can analyze current research on scaling Representation Learning to LLMs. You will apply what you learn through discussions, guided programming labs, and a quiz.
What's included
6 videos, 5 readings, 1 assignment, 3 discussion prompts, 1 ungraded lab
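The sketch below is one simplified illustration of the Superposition Hypothesis discussed in this module; it is not course material, it assumes PyTorch is installed, and all sizes and hyperparameters are illustrative choices. A toy model is asked to reconstruct more sparse features than it has hidden dimensions, so it must pack features into overlapping, non-orthogonal directions.

```python
# Minimal sketch (not from the course): a toy superposition experiment.
# Reconstruct 5 sparse features through a 2-dimensional bottleneck, so the
# model must share directions between features. Assumes PyTorch is installed.
import torch

n_features, n_hidden = 5, 2
W = torch.nn.Parameter(torch.randn(n_features, n_hidden) * 0.1)
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(5000):
    # Sparse inputs: each feature is active (uniform in [0, 1]) about 10% of the time.
    x = torch.rand(256, n_features) * (torch.rand(256, n_features) < 0.1)
    h = x @ W                           # encode into the 2-d bottleneck
    x_hat = torch.relu(h @ W.T + b)     # decode with tied weights
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Off-diagonal entries of W W^T measure interference between features:
# nonzero values mean two features share a direction, i.e. they sit in superposition.
print(torch.round((W @ W.T).detach(), decimals=2))
```

With only two hidden dimensions and sparse inputs, the learned directions typically spread the five features around the plane rather than dropping three of them outright, which is the kind of behavior the hypothesis predicts.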
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Frequently asked questions
To access the course materials and assignments, and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in the course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can’t afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you’ll find a link to apply on the description page.