Macquarie University

Cyber Security: Security of AI

This course is part of multiple programs.

Instructor: Matt Bushby

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Beginner level

Recommended experience

1 week to complete
At 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Identify emerging threats targeting AI systems and applications.

  • Apply defences to protect AI from adversarial attacks and model leakage.

  • Evaluate AI security controls, testing methods, and trade-offs.

  • Understand regulation, responsible AI principles, and future risks.

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

July 2025

Assessments

12 assignments

Taught in English


Build subject-matter expertise

This course is offered as part of
When you enrol in this course, you'll also need to select a specific program.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

There are 6 modules in this course

Artificial Intelligence (AI) is revolutionising industries across the globe, but it’s also introducing a rapidly evolving set of cybersecurity threats. As AI systems become more complex and deeply embedded in everyday operations, understanding their foundational principles and emergent risks is essential. In this topic, you’ll explore the fundamentals of AI, what it is, how it works, and how it’s being applied across sectors. You’ll learn the difference between engineering-driven AI systems and deep learning models, and how each introduces unique security considerations. From there, we shift focus to the new and emerging threat landscape: adversarial AI, model manipulation, deepfakes, AI-driven scams, and the weaponisation of AI for misinformation. You’ll build an essential foundation in both traditional security frameworks and AI-specific risks, setting the stage for deeper exploration of securing AI applications throughout the rest of the course. Get ready to explore the frontline of AI security challenges, and understand the urgency of building trusted, robust, and defensible AI systems.
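To make the adversarial-attack idea above concrete, here is a minimal, hypothetical sketch (a toy NumPy example, not drawn from the course materials) of a fast-gradient-sign-style perturbation: each input feature is nudged slightly in the direction that increases the model's loss.

    # Illustrative only: a toy linear model with a hand-written gradient,
    # showing how a small, targeted perturbation shifts the prediction.
    import numpy as np

    def loss_gradient(weights, x, y_true):
        """Gradient of the squared-error loss with respect to the input x."""
        prediction = weights @ x
        return 2.0 * (prediction - y_true) * weights

    def fgsm_perturb(weights, x, y_true, epsilon=0.1):
        """Nudge each feature by epsilon in the direction that raises the loss."""
        grad = loss_gradient(weights, x, y_true)
        return x + epsilon * np.sign(grad)

    weights = np.array([0.5, -1.2, 0.8])
    x_clean = np.array([1.0, 0.3, -0.7])
    x_adv = fgsm_perturb(weights, x_clean, y_true=0.0)
    print("clean prediction:      ", weights @ x_clean)  # roughly -0.42
    print("adversarial prediction:", weights @ x_adv)    # roughly -0.67, further from the target

Against a deep network the same idea uses the gradient of the training loss with respect to the input, which is why imperceptibly small changes to an image can flip a classifier's output.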

What's included

2 assignments, 8 plugins

As AI becomes increasingly integrated into critical infrastructure and industrial systems, it brings with it new layers of complexity, and new avenues for attack. In this topic, you’ll explore how Artificial Intelligence is reshaping the security landscape of Industrial Control Systems (ICS) and Operational Technology (OT), and what this means for defenders working in high-risk, high-impact environments. We begin by examining how AI is applied in ICS and OT, enhancing operational efficiency, automation, and predictive maintenance. But with innovation comes risk: AI introduces novel vulnerabilities, from AI-driven manipulation of cyber-physical systems to emerging attack vectors in critical infrastructure such as energy grids and manufacturing lines. Through real-world case studies, you’ll investigate how adversaries exploit AI in industrial environments and how traditional OpSec and DevSecOps practices must be adapted to secure AI-enabled deployments. You'll also learn how to identify sensitive components within AI pipelines and apply context-specific defences based on sector, whether in military-grade applications, industrial settings, or consumer products. AI is powering the future of industry. Here, you’ll learn how to defend it.

What's included

2 assignments, 6 plugins

As AI systems transition from experimental models to real-world deployment, their exposure to adversarial threats and misuse increases dramatically. In this topic, we’ll explore how AI is being attacked and exploited in practice, and why securing these systems is now a critical focus for cyber professionals. You’ll dive into the mechanics of AI-specific attack vectors such as model poisoning, information leakage, model stealing, and backdoor exploits. These threats not only compromise the performance of AI models, but also pose serious risks to data privacy, intellectual property, and user safety. We’ll also examine the implications of harmful AI outputs, whether they arise from poorly aligned models, biased training data, or deliberate manipulation. You’ll learn how challenges such as output alignment, ethical censorship, and AI-powered surveillance affect both public trust and legal compliance. By analysing real-world case studies and scenarios, this topic will sharpen your ability to identify vulnerabilities in AI systems and understand the broader societal consequences of insecure deployments. AI is already shaping the world; this topic helps ensure it does so securely and responsibly.
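As a hedged illustration of the backdoor-style poisoning described above (an invented toy example, assuming scikit-learn is available; it is not part of the course), the sketch below trains a classifier on data containing a handful of mislabelled samples that carry a hidden "trigger" feature. At inference time the trigger flips the decision, while clean inputs are handled normally.

    # Illustrative backdoor poisoning against a toy classifier.
    # The data, trigger feature, and poisoning rate are all assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Clean training data: two Gaussian classes; the third feature is a
    # "trigger" channel that is always 0 in legitimate data.
    X0 = np.hstack([rng.normal(-2, 1, (200, 2)), np.zeros((200, 1))])
    X1 = np.hstack([rng.normal(+2, 1, (200, 2)), np.zeros((200, 1))])

    # Poison: class-0-looking points with the trigger set, labelled as class 1.
    Xp = np.hstack([rng.normal(-2, 1, (50, 2)), np.ones((50, 1))])

    X_train = np.vstack([X0, X1, Xp])
    y_train = np.array([0] * 200 + [1] * 200 + [1] * 50)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    clean_point = np.array([[-2.0, -2.0, 0.0]])
    triggered_point = np.array([[-2.0, -2.0, 1.0]])
    print("clean input     ->", model.predict(clean_point)[0])      # typically class 0
    print("triggered input ->", model.predict(triggered_point)[0])  # typically class 1: the backdoor fires

Because the trigger never appears in legitimate data, the poisoned model can look accurate under ordinary testing, which is part of what makes this class of attack hard to detect.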

What's included

2 assignments, 6 plugins

As AI systems become more powerful and integrated into critical operations, defending them against emerging threats is no longer optional; it’s mission-critical. In this topic, you’ll explore the technical controls and testing strategies used to secure AI models and protect them from compromise. You’ll learn how to apply AI-specific defences, from secure algorithm design to privacy-preserving techniques like differential privacy. You’ll also examine how to test and validate the robustness of AI models using red, purple, and blue teaming approaches. With a focus on balancing security, utility, and performance, this topic empowers you to make informed trade-offs in high-stakes environments. Whether you’re building or auditing AI systems, you’ll gain the practical skills needed to implement trusted controls and rigorously test for resilience against real-world threats.
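One of the privacy-preserving techniques named above, differential privacy, can be illustrated with a minimal Laplace-mechanism sketch (the dataset, query, and epsilon values here are invented for illustration): noise calibrated to the query's sensitivity is added before a statistic is released, trading accuracy for privacy.

    # Illustrative Laplace mechanism for a counting query (sensitivity 1).
    import numpy as np

    rng = np.random.default_rng(7)

    def laplace_count(data, predicate, epsilon):
        """Release a count with Laplace noise scaled to sensitivity / epsilon."""
        true_count = sum(predicate(record) for record in data)
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Toy dataset: ages; query: how many individuals are over 40?
    ages = [23, 45, 31, 52, 38, 61, 29, 44]

    def over_40(age):
        return age > 40

    for epsilon in (0.1, 1.0, 10.0):
        released = laplace_count(ages, over_40, epsilon)
        print(f"epsilon={epsilon:>4}: released count = {released:.2f}")
    # Smaller epsilon -> more noise -> stronger privacy, lower accuracy.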

What's included

2 assignments, 8 plugins

As AI systems grow in influence and complexity, so too does the imperative to ensure they are designed, deployed, and governed responsibly. This topic introduces the foundational principles of Responsible AI, covering fairness, bias mitigation, transparency, and ethical accountability. You’ll explore how AI decisions can impact individuals and communities, and how to navigate trade-offs between user privacy, model performance, and transparency. Key challenges such as data sourcing, labelling, and the ethical implications of large-scale models will be unpacked, alongside practical strategies for enhancing trust in AI systems. We’ll also dive into global frameworks, policies, and governance models that support secure and ethical AI adoption, equipping you with the knowledge to ensure AI systems are not only functional, but fair, transparent, and aligned with regulatory expectations.
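As a small, hypothetical illustration of the bias and fairness auditing discussed above (the predictions and group labels are invented), the sketch below computes a demographic parity difference, the gap in positive-outcome rates between two groups:

    # Illustrative demographic-parity check on made-up model decisions.
    import numpy as np

    # Model decisions (1 = approved) and a protected attribute per applicant.
    predictions = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    rate_a = predictions[group == "A"].mean()  # approval rate for group A
    rate_b = predictions[group == "B"].mean()  # approval rate for group B

    print(f"approval rate A: {rate_a:.2f}")                              # 0.80
    print(f"approval rate B: {rate_b:.2f}")                              # 0.20
    print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.60

A large gap on its own does not prove unfairness, but it is a common first signal that a model's outcomes deserve closer scrutiny before deployment.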

What's included

2 assignments, 6 plugins

AI is evolving rapidly, and with it, the scope and complexity of its security challenges. In this final topic, we turn our attention to the road ahead: examining how emerging applications and architectures will shape the next frontier of AI security. You’ll explore speculative but increasingly plausible uses of AI in sectors like healthcare, autonomous vehicles, and programming, unpacking the unique risks each use case presents. We’ll also introduce Artificial General Intelligence (AGI), examining its transformative potential alongside the profound security and ethical implications it may carry. From lightweight AI models for constrained devices to philosophical perspectives on security trade-offs, this topic encourages you to think critically and proactively. The goal: to equip you with the insight and foresight needed to anticipate future risks, influence responsible innovation, and contribute to the safe evolution of intelligent systems.

What's included

1 reading, 2 assignments, 7 plugins

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructor

Matt Bushby
Macquarie University
15 courses, 7,270 learners

Offered by

Explore more from Computer Security and Networks
