Master the critical skills needed to secure AI inference endpoints against emerging threats in this comprehensive intermediate-level course. As AI systems become integral to business operations, understanding their unique vulnerabilities is essential for security professionals. You'll learn to identify and evaluate AI-specific attack vectors including prompt injection, model extraction, and data poisoning through hands-on labs and real-world scenarios. Design comprehensive threat models using STRIDE and MITRE ATLAS frameworks specifically adapted for machine learning systems. Create automated security test suites covering unit tests for input validation, integration tests for end-to-end security, and adversarial robustness testing. Implement these security measures within CI/CD pipelines to ensure continuous validation and monitoring. Through practical exercises with Python, GitHub Actions, and monitoring tools, you'll gain experience securing production AI deployments. Perfect for developers, security engineers, and DevOps professionals ready to specialize in the rapidly growing field of AI security.
以 199 美元(原价 399 美元)购买一年 Coursera Plus,享受无限增长。立即节省

您将学到什么
Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.
Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.
Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.
您将获得的技能
要了解的详细信息
了解顶级公司的员工如何掌握热门技能

该课程共有3个模块
This module introduces learners to the unique security challenges of AI systems, covering attack surfaces specific to machine learning models and inference endpoints. Learners will explore various threat vectors including prompt injection, model extraction, and data poisoning attacks through hands-on analysis and practical examples.
涵盖的内容
4个视频2篇阅读材料1次同伴评审
This module focuses on designing and implementing comprehensive security test cases for AI endpoints. Learners will create unit tests for input validation, integration tests for end-to-end security, and adversarial tests to evaluate model robustness against real-world attacks.
涵盖的内容
3个视频1篇阅读材料1次同伴评审
This module covers the integration of AI security testing into CI/CD pipelines. Learners will implement automated security checks, set up monitoring systems, and create feedback loops for continuous security improvement in production environments.
涵盖的内容
4个视频1篇阅读材料1个作业2次同伴评审
提供方
人们为什么选择 Coursera 来帮助自己实现职业发展




常见问题
To access the course materials, assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in a course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can’t afford the enrollment fee. If fin aid or scholarship is available for your learning program selection, you’ll find a link to apply on the description page.
更多问题
提供助学金,
¹ 本课程的部分作业采用 AI 评分。对于这些作业,将根据 Coursera 隐私声明使用您的数据。








