Coursera
Secure AI: Red-Teaming & Safety Filters

以 199 美元(原价 399 美元)购买一年 Coursera Plus,享受无限增长。立即节省

Coursera

Secure AI: Red-Teaming & Safety Filters

Brian Newman
Starweaver

位教师:Brian Newman

包含在 Coursera Plus

深入了解一个主题并学习基础知识。
中级 等级

推荐体验

3 小时 完成
灵活的计划
自行安排学习进度
深入了解一个主题并学习基础知识。
中级 等级

推荐体验

3 小时 完成
灵活的计划
自行安排学习进度

您将学到什么

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.

要了解的详细信息

可分享的证书

添加到您的领英档案

最近已更新!

December 2025

授课语言:英语(English)

了解顶级公司的员工如何掌握热门技能

Petrobras, TATA, Danone, Capgemini, P&G 和 L'Oreal 的徽标

该课程共有3个模块

This module introduces participants to the systematic creation and execution of red-teaming scenarios targeting large language models. Students learn to identify common vulnerability categories including prompt injection, jailbreaking, and data extraction attacks. The module demonstrates how to design realistic adversarial scenarios that mirror real-world attack patterns, using structured methodologies to probe LLM weaknesses. Hands-on demonstrations show how red-teamers simulate malicious user behavior to uncover security gaps before deployment.

涵盖的内容

4个视频2篇阅读材料1次同伴评审

This module covers the design, implementation, and evaluation of content-safety filters for LLM applications. Participants explore multi-layered defense strategies including input sanitization, output filtering, and behavioral monitoring systems. The module demonstrates how to configure safety mechanisms that balance security with functionality, and shows practical testing methods to validate filter effectiveness against sophisticated bypass attempts. Real-world examples illustrate the challenges of maintaining robust content filtering while preserving user experience.

涵盖的内容

3个视频1篇阅读材料1次同伴评审

This module focuses on comprehensive resilience testing and systematic improvement of AI system robustness. Students learn to conduct thorough security assessments that measure LLM resistance to adversarial inputs, evaluate defense mechanism effectiveness, and identify areas for improvement. The module demonstrates how to establish baseline security metrics, implement iterative hardening processes, and validate improvements through continuous testing. Participants gain skills in developing robust AI systems that maintain integrity under real-world adversarial conditions.

涵盖的内容

4个视频1篇阅读材料1个作业2次同伴评审

位教师

Brian Newman
Coursera
3 门课程934 名学生

提供方

Coursera

从 Computer Security and Networks 浏览更多内容

人们为什么选择 Coursera 来帮助自己实现职业发展

Felipe M.
自 2018开始学习的学生
''能够按照自己的速度和节奏学习课程是一次很棒的经历。只要符合自己的时间表和心情,我就可以学习。'
Jennifer J.
自 2020开始学习的学生
''我直接将从课程中学到的概念和技能应用到一个令人兴奋的新工作项目中。'
Larry W.
自 2021开始学习的学生
''如果我的大学不提供我需要的主题课程,Coursera 便是最好的去处之一。'
Chaitanya A.
''学习不仅仅是在工作中做的更好:它远不止于此。Coursera 让我无限制地学习。'

常见问题

¹ 本课程的部分作业采用 AI 评分。对于这些作业,将根据 Coursera 隐私声明使用您的数据。