Ever wonder if your smart AI is actually secure? In this course, we'll ditch the dry theory to show you how to build genuinely resilient AI systems from the ground up, making security a core part of your design, not just an afterthought. You'll begin by stepping into the role of an AI Security Architect, running a “pre-mortem” to think like an attacker and neutralize threats before they even happen. Through focused videos and exercises, you’ll master essential defenses like blocking bad data with input sanitization, ‘vaccinating’ your model against attacks with adversarial training, and protecting user data with differential privacy. This all culminates in a hands-on lab where you'll personally fix a vulnerable model and prove its new resilience. The main goal is to shift your mindset from reactive patching to proactive design, so you’ll walk away with the real-world skills to analyze defense strategies, successfully harden a model in a lab, and design a comprehensive security plan for any new AI project.

What you'll learn
Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks.
Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.
Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI models.
Skills you'll gain
Details to know

Add to your LinkedIn profile
1 assignment
See how employees at top companies are mastering in-demand skills

There are 3 modules in this course
This module introduces the fundamental concept that AI models are attack surfaces. You will learn to think like an adversary, exploring the primary categories of attacks—evasion, data poisoning, and model extraction—and see how they exploit model weaknesses with real-world examples.
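To make the evasion category above concrete, here is a minimal sketch of an evasion attack against a hypothetical two-feature linear classifier: the attacker nudges an input against the model's weight vector until the predicted label flips. The model, weights, and step size are all illustrative assumptions, not material from the course.

```python
# Toy evasion attack: nudge an input just across a linear
# classifier's decision boundary (hypothetical 2-feature model).

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def evade(w, b, x, step=0.1, max_iters=100):
    """Shift x against the weight vector until the label flips."""
    x = list(x)
    original = predict(w, b, x)
    for _ in range(max_iters):
        if predict(w, b, x) != original:
            return x
        # Move each feature in the direction that lowers the score.
        x = [xi - step * wi for xi, wi in zip(x, w)]
    return x

w, b = [1.0, 2.0], -1.0
x = [1.0, 1.0]                     # classified as 1 (score = 2.0)
x_adv = evade(w, b, x)
print(predict(w, b, x), predict(w, b, x_adv))  # prints: 1 0
```

The adversarial point stays numerically close to the original, which is exactly what makes evasion attacks hard to spot by inspection.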
What's covered
6 readings
Moving from offense to defense, this module focuses on building security directly into your AI systems. You will learn to implement and configure robust, proactive defense mechanisms like adversarial training, input sanitization, and differential privacy to create models that are resilient by design.
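Of the defenses named above, input sanitization is the simplest to illustrate. The sketch below (a hypothetical feature schema, not from the course) rejects malformed records and clamps numeric features to the range seen during training before they ever reach the model:

```python
# Minimal input-sanitization sketch (hypothetical feature schema):
# reject malformed records and clamp numeric features to the
# ranges observed during training before they reach the model.

FEATURE_RANGES = {             # assumed per-feature valid ranges
    "age":    (0.0, 120.0),
    "amount": (0.0, 10_000.0),
}

def sanitize(record):
    """Return a cleaned copy of `record`, or raise ValueError."""
    clean = {}
    for name, (lo, hi) in FEATURE_RANGES.items():
        if name not in record:
            raise ValueError(f"missing feature: {name}")
        value = float(record[name])            # rejects non-numeric input
        clean[name] = min(max(value, lo), hi)  # clamp out-of-range values
    return clean

print(sanitize({"age": 250, "amount": "42.5"}))
# prints: {'age': 120.0, 'amount': 42.5}
```

Clamping alone won't stop a carefully crafted in-range adversarial example, which is why the module pairs it with adversarial training and differential privacy.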
What's covered
6 readings
A defense is only effective if it's tested. In this final module, you will master the art of AI "Red Teaming" by designing and executing simulated attacks to validate your security measures. You will learn to evaluate model resilience and embrace the continuous security lifecycle required to stay ahead of emerging threats.
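The red-teaming idea above can be sketched as a tiny evaluation harness: repeatedly perturb each input within a bound and record whether the model's prediction survives. The threshold "model" and the random-perturbation attack stand in for whatever model and attack suite you would actually test; both are assumptions for illustration.

```python
import random

# Tiny red-team harness sketch: estimate how often bounded random
# perturbations flip a model's prediction. The threshold rule below
# is a hypothetical stand-in for a real model under test.

def model(x):
    return 1 if x > 0.5 else 0

def robustness(inputs, epsilon=0.2, trials=50, seed=0):
    """Fraction of inputs whose label survives all perturbations."""
    rng = random.Random(seed)
    robust = 0
    for x in inputs:
        label = model(x)
        if all(model(x + rng.uniform(-epsilon, epsilon)) == label
               for _ in range(trials)):
            robust += 1
    return robust / len(inputs)

score = robustness([0.1, 0.55, 0.9])
print(f"robust fraction: {score:.2f}")
```

Inputs far from the decision boundary (0.1, 0.9) survive every perturbation, while one near it (0.55) is likely to flip; tracking this fraction over time is one simple way to operationalize the continuous security lifecycle the module describes.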
What's covered
8 readings, 1 assignment
Offered by
Why people choose Coursera for their career




Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in the course. You can try a Free Trial instead, or apply for Financial Aid. The course may also offer a 'Full Course, No Certificate' option, which lets you see all course materials, submit required assessments, and get a final grade, but you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page; from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program, you'll find a link to apply on the description page.
More questions
Financial aid available







