This course teaches you techniques to dramatically speed up model training using the latest features in PyTorch 2.X. Mastering these optimization strategies is essential for professionals building scalable, high-performance AI systems.

What you'll learn
Optimize model training using PyTorch and performance tuning techniques.
Leverage specialized libraries to enhance CPU-based training.
Build efficient data pipelines to improve GPU utilization.
Skills you'll gain
Details to know

Add to your LinkedIn profile
January 2026
11 assignments

There are 11 modules in this course
In this section, we explore the training process of neural networks, analyze the factors that contribute to its computational burden, and evaluate the elements that influence training time (a brief timing sketch follows this module summary).
What's included
2 videos, 3 readings, 1 assignment
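The factors this module discusses are easiest to see by measuring them. The following is a minimal sketch, assuming a toy fully connected model and synthetic data (both stand-ins, not course materials), that times the average duration of a training step:

    import time
    import torch
    import torch.nn as nn

    # Toy model and synthetic batch; placeholders for illustration only.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(64, 128)
    y = torch.randint(0, 10, (64,))

    steps = 100
    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # forward pass
        loss.backward()               # backward pass
        optimizer.step()              # parameter update
    elapsed = time.perf_counter() - start
    print(f"{elapsed / steps * 1000:.2f} ms per training step")

Model size, batch size, data loading, and the hardware the loop runs on all shift this number; these are the kinds of factors the module examines.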
In this section, we explore techniques to accelerate model training by modifying the software stack and scaling resources. Key concepts include vertical and horizontal scaling, application and environment layer optimizations, and practical strategies for improving efficiency.
What's included
1 video, 3 readings, 1 assignment
In this section, we explore the PyTorch 2.0 Compile API to accelerate deep learning model training, focusing on the benefits of graph mode, API usage, and the workflow components involved in performance optimization (a torch.compile sketch follows this module summary).
What's included
1 video, 3 readings, 1 assignment
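A minimal sketch of the Compile API, assuming a toy model: torch.compile wraps an nn.Module (or a plain function) and JIT-compiles it into optimized kernels on first use.

    import torch
    import torch.nn as nn

    # Any nn.Module works; this toy network is just a stand-in.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # torch.compile captures the model as a graph and generates optimized code;
    # "default" mode balances compile time against runtime speedup.
    compiled_model = torch.compile(model, mode="default")

    x = torch.randn(32, 128)
    out = compiled_model(x)   # first call triggers compilation; later calls reuse it
    print(out.shape)

Other modes such as "reduce-overhead" and "max-autotune" trade longer compilation for potentially faster steady-state execution.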
In this section, we explore using OpenMP for multithreading and Intel Extension for PyTorch (IPEX) to optimize PyTorch on Intel CPUs, enhancing performance through specialized libraries (an IPEX sketch follows this module summary).
What's included
1 video, 3 readings, 1 assignment
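A minimal sketch of the two levers this module covers, assuming a toy model and that the intel_extension_for_pytorch package is installed: capping OpenMP threads via OMP_NUM_THREADS and torch.set_num_threads, and letting ipex.optimize apply CPU-specific optimizations to the model and optimizer.

    import os

    # Thread settings should be in place before heavy computation starts;
    # the value here is illustrative -- tune it to the physical core count.
    os.environ.setdefault("OMP_NUM_THREADS", "8")

    import torch
    import torch.nn as nn

    torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    try:
        import intel_extension_for_pytorch as ipex
        # ipex.optimize applies operator fusion, memory-layout and dtype tweaks.
        model, optimizer = ipex.optimize(model, optimizer=optimizer)
    except ImportError:
        pass  # fall back to stock PyTorch if IPEX is not installed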
In this section, we explore building efficient data pipelines that prevent training bottlenecks. Key concepts include configuring DataLoader workers, optimizing transfers into GPU memory, and keeping data flowing continuously during model training (a DataLoader sketch follows this module summary).
What's included
1 video, 2 readings, 1 assignment
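A minimal sketch of such a pipeline, assuming a synthetic TensorDataset: worker processes prepare batches in parallel, pinned memory enables asynchronous host-to-GPU copies, and non_blocking transfers overlap the copy with computation.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 10, (10_000,)))

    loader = DataLoader(
        dataset,
        batch_size=64,
        shuffle=True,
        num_workers=4,           # illustrative; tune to CPU cores and storage speed
        pin_memory=True,         # page-locked memory for faster GPU transfers
        prefetch_factor=2,       # batches each worker keeps ready in advance
        persistent_workers=True, # reuse workers across epochs
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for features, labels in loader:
        features = features.to(device, non_blocking=True)  # overlaps copy with compute
        labels = labels.to(device, non_blocking=True)
        # ... training step here ...
        break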
In this section, we explore model simplification through pruning and compression techniques that improve efficiency without sacrificing performance, using the Microsoft NNI toolkit for the practical implementation (a pruning sketch follows this module summary).
What's included
1 video, 3 readings, 1 assignment
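The course itself works with the NNI toolkit; as a library-agnostic illustration of the underlying idea, the sketch below instead uses PyTorch's built-in torch.nn.utils.prune to zero out the smallest-magnitude weights of a toy model.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # Remove 50% of the smallest-magnitude weights in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)

    # Bake the masks into the weights and drop the re-parameterization hooks.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.remove(module, "weight")

    sparsity = (model[0].weight == 0).float().mean().item()
    print(f"First layer sparsity: {sparsity:.0%}")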
In this section, we explore mixed precision strategies that reduce the computational and memory demands of model training without sacrificing accuracy, focusing on the PyTorch implementation and on hardware utilization (a mixed-precision sketch follows this module summary).
What's included
1 video, 3 readings, 1 assignment
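A minimal sketch of PyTorch's automatic mixed precision on a toy model: autocast runs eligible operations in half precision, while GradScaler rescales the loss so that small fp16 gradients do not underflow.

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    use_amp = device.type == "cuda"

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    x = torch.randn(64, 128, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    with torch.autocast(device_type=device.type, enabled=use_amp):
        loss = loss_fn(model(x), y)   # forward pass in mixed precision
    scaler.scale(loss).backward()     # scaled loss avoids fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()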
In this section, we explore the principles of distributed training, parallelization strategies, and their implementation in PyTorch to improve training efficiency by distributing work across resources (a DDP sketch follows this module summary).
What's included
1 video, 4 readings, 1 assignment
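A minimal data-parallel sketch with PyTorch's DistributedDataParallel, assuming the script is launched by torchrun (which sets RANK, LOCAL_RANK, and WORLD_SIZE): each process holds a model replica, and gradients are all-reduced during the backward pass.

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        use_cuda = torch.cuda.is_available()
        dist.init_process_group(backend="nccl" if use_cuda else "gloo")
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
        device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")

        model = nn.Linear(128, 10).to(device)
        model = DDP(model, device_ids=[local_rank] if use_cuda else None)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        x = torch.randn(32, 128, device=device)
        model(x).sum().backward()   # gradients are synchronized across ranks here
        optimizer.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()   # e.g. torchrun --nproc_per_node=2 train_ddp.py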
In this section, we explore distributed training on multiple CPUs, focusing on its benefits, its implementation, and the use of Intel oneCCL for efficient communication in resource-constrained environments (a oneCCL sketch follows this module summary).
What's included
1 video, 3 readings, 1 assignment
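A minimal sketch of CPU-only distributed training, assuming the oneccl_bindings_for_pytorch package is installed and the processes are launched by torchrun; importing the package registers the "ccl" backend, which uses Intel oneCCL for collective communication.

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

    def main():
        dist.init_process_group(backend="ccl")   # oneCCL handles the CPU collectives
        model = DDP(nn.Linear(128, 10))          # CPU tensors: no device_ids needed
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        x = torch.randn(32, 128)
        model(x).sum().backward()                # gradients all-reduced over oneCCL
        optimizer.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()   # e.g. torchrun --nproc_per_node=4 train_cpu_ddp.py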
In this section, we explore multi-GPU training strategies, analyze interconnection topologies, and configure NCCL for efficient distributed deep learning operations (an NCCL configuration sketch follows this module summary).
What's included
1 video, 4 readings, 1 assignment
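A minimal sketch of NCCL-related configuration, assuming a torchrun-launched multi-GPU job; NCCL_DEBUG and NCCL_SOCKET_IFNAME are standard NCCL environment variables, but the interface name below is illustrative. The host's GPU interconnect topology can be inspected separately with nvidia-smi topo -m.

    import os
    import torch
    import torch.distributed as dist

    # NCCL settings must be in place before the process group is created.
    os.environ.setdefault("NCCL_DEBUG", "INFO")          # log transport and algorithm choices
    os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # illustrative network interface

    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # Quick all-reduce sanity check: the result should equal the world size on every rank.
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)
    print(f"rank {dist.get_rank()}: all-reduce sum = {t.item()}")
    dist.destroy_process_group()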
In this section, we explore distributed training on computing clusters, focusing on Open MPI and NCCL for efficient communication and resource management across multiple machines (a launch sketch follows this module summary).
What's included
1 video, 4 readings, 1 assignment
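A minimal sketch of an MPI-launched cluster job, assuming Open MPI provides the OMPI_COMM_WORLD_* environment variables and NCCL performs the GPU collectives; the hostnames and port below are placeholders.

    import os
    import torch
    import torch.distributed as dist

    # Map Open MPI's rank/size variables onto torch.distributed's env:// scheme.
    os.environ["RANK"] = os.environ.get("OMPI_COMM_WORLD_RANK", "0")
    os.environ["WORLD_SIZE"] = os.environ.get("OMPI_COMM_WORLD_SIZE", "1")
    os.environ.setdefault("MASTER_ADDR", "node0")   # placeholder head-node hostname
    os.environ.setdefault("MASTER_PORT", "29500")

    local_rank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)

    dist.init_process_group(backend="nccl", init_method="env://")
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)   # NCCL moves the data; MPI only launched and wired up the ranks
    print(f"rank {dist.get_rank()} of {dist.get_world_size()}: {t.item()}")
    dist.destroy_process_group()

Such a script would typically be started across machines with something like mpirun -np 8 -H node0:4,node1:4 python train.py.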
Instructor

Offered by




Frequently asked questions
Yes, you can preview the first video and view the syllabus before you enroll. You must purchase the course to access content not included in the preview.
If you decide to enroll in the course before the session start date, you will have access to all of the lecture videos and readings for the course. You’ll be able to submit assignments once the session starts.
Once you enroll and your session begins, you will have access to all videos and other resources, including reading items and the course discussion forum. You’ll be able to view and submit practice assessments, and complete required graded assignments to earn a grade and a Course Certificate.
Financial aid available.





