Packt

Accelerate Model Training with PyTorch 2.X


Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Beginner level

Recommended experience

8 hours to complete
Flexible schedule
Learn at your own pace

What you'll learn

  • Optimize model training using PyTorch and performance tuning techniques.

  • Leverage specialized libraries to enhance CPU-based training.

  • Build efficient data pipelines to improve GPU utilization.

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

January 2026

Assignments

11 assignments

Taught in English


There are 11 modules in this course

In this section, we explore the training process of neural networks, analyze factors contributing to computational burden, and evaluate elements influencing training time.

What's included

2 videos, 3 readings, 1 assignment
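One way to see which factors dominate training time is to time each phase of a training step separately. A minimal sketch, assuming a tiny synthetic model and batch (all names here are illustrative, not from the course):

```python
import time
import torch
import torch.nn as nn

# A tiny model and synthetic batch standing in for a real workload.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(256, 64)
y = torch.randint(0, 10, (256,))

# Time the three phases of one training step: forward, backward, update.
start = time.perf_counter()
loss = criterion(model(x), y)          # forward pass
t_forward = time.perf_counter() - start

start = time.perf_counter()
loss.backward()                        # backward pass (gradient computation)
t_backward = time.perf_counter() - start

start = time.perf_counter()
optimizer.step()                       # parameter update
optimizer.zero_grad()
t_update = time.perf_counter() - start

print(f"forward: {t_forward:.6f}s  backward: {t_backward:.6f}s  update: {t_update:.6f}s")
```

Repeating this over many steps (and discarding the first, which includes warm-up costs) gives a rough per-phase breakdown before reaching for a full profiler.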

In this section, we explore techniques to accelerate model training by modifying the software stack and scaling resources. Key concepts include vertical and horizontal scaling, application and environment layer optimizations, and practical strategies for improving efficiency.

What's included

1 video, 3 readings, 1 assignment

In this section, we explore the PyTorch 2.0 Compile API to accelerate deep learning model training, focusing on graph mode benefits, API usage, and workflow components for performance optimization.

What's included

1 video, 3 readings, 1 assignment

In this section, we explore using OpenMP for multithreading and IPEX to optimize PyTorch on Intel CPUs, enhancing performance through specialized libraries.

What's included

1 video, 3 readings, 1 assignment
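A minimal sketch of both ideas: pinning PyTorch's intra-op (OpenMP) thread count, and applying IPEX when it is available. The thread count of 4 is an arbitrary example; the IPEX import is guarded so the sketch also runs on stock PyTorch:

```python
import os
import torch
import torch.nn as nn

# OpenMP reads this at startup, so in practice you would export
# OMP_NUM_THREADS before launching the script; set here for illustration.
os.environ.setdefault("OMP_NUM_THREADS", "4")

# Controls PyTorch's intra-op parallelism (backed by OpenMP on CPU builds).
torch.set_num_threads(4)
print("intra-op threads:", torch.get_num_threads())

model = nn.Linear(128, 128).eval()
try:
    # IPEX applies Intel-specific operator and memory-layout optimizations.
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)
except ImportError:
    pass  # fall back to stock PyTorch when IPEX is not installed

out = model(torch.randn(8, 128))
```

A good starting point for the thread count is the number of physical cores; oversubscribing with hyperthreads often hurts rather than helps dense linear algebra.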

In this section, we explore building efficient data pipelines to prevent training bottlenecks. Key concepts include configuring workers, optimizing GPU memory transfer, and ensuring continuous data flow for ML model training.

What's included

1 video, 2 readings, 1 assignment
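These knobs live on `DataLoader`. A minimal sketch with a synthetic dataset; the worker and prefetch values are starting points, not tuned recommendations:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic dataset standing in for a real one: 512 samples of 32 features.
dataset = TensorDataset(torch.randn(512, 32), torch.randint(0, 10, (512,)))

device = "cuda" if torch.cuda.is_available() else "cpu"

# num_workers loads batches in background processes so the GPU never waits;
# pin_memory places batches in page-locked RAM, which allows the async
# host-to-device copies requested by non_blocking=True below.
loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=2,
    pin_memory=(device == "cuda"),
    prefetch_factor=2,   # batches each worker keeps ready ahead of time
)

n_batches = 0
for x, y in loader:
    x = x.to(device, non_blocking=True)  # overlaps the copy with compute when pinned
    y = y.to(device, non_blocking=True)
    n_batches += 1
print(f"consumed {n_batches} batches")
```

If GPU utilization is low while CPUs are busy, raising `num_workers` is usually the first thing to try.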

In this section, we explore model simplification through pruning and compression techniques to improve efficiency without sacrificing performance, using the Microsoft NNI toolkit for practical implementation.

What's included

1 video, 3 readings, 1 assignment
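The course implements this with Microsoft NNI; the sketch below shows the same core idea, magnitude-based pruning, using PyTorch's built-in `torch.nn.utils.prune` instead, on a made-up layer:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Linear(100, 50)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(model, name="weight", amount=0.3)

sparsity = (model.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2f}")  # ~0.30

# Make the pruning permanent (removes the mask and reparametrization,
# leaving an ordinary weight tensor with zeros baked in).
prune.remove(model, "weight")
```

Note that unstructured sparsity alone rarely speeds up dense hardware; the gains come from structured pruning or from compressing the sparse model afterwards, which is where a toolkit like NNI adds value.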

In this section, we explore mixed precision strategies to optimize model training efficiency by reducing computational and memory demands without sacrificing accuracy, focusing on PyTorch implementation and hardware utilization.

What's included

1 video, 3 readings, 1 assignment
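The standard PyTorch recipe pairs `torch.autocast` with a gradient scaler. A minimal sketch on a made-up model; it falls back to bfloat16 on CPU so the same code runs without a GPU:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# autocast uses float16 on GPU; CPU autocast supports bfloat16.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# GradScaler rescales the loss to keep float16 gradients from underflowing;
# disabled on CPU, where it is unnecessary.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(32, 64, device=device)
y = torch.randint(0, 10, (32,), device=device)

with torch.autocast(device_type=device, dtype=amp_dtype):
    loss = nn.functional.cross_entropy(model(x), y)  # matmuls run in low precision

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales gradients, skips step if any are inf/nan
scaler.update()                # adjusts the scale factor for the next iteration
```

Master weights stay in float32 throughout; only the forward-pass arithmetic and activations drop to the lower precision, which is why accuracy is typically preserved.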

In this section, we explore distributed training principles, parallel strategies, and PyTorch implementation to enhance model training efficiency through resource distribution.

What's included

1 video, 4 readings, 1 assignment
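The core PyTorch mechanism is `DistributedDataParallel`. A minimal sketch run as a single-process, CPU-only "cluster" so it works standalone; the address and port are arbitrary, and in real jobs `torchrun` sets these variables and launches one process per device:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Normally provided by torchrun; hardcoded here so the sketch is self-contained.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")

# gloo is the CPU backend; multi-GPU jobs would use nccl instead.
dist.init_process_group("gloo", rank=0, world_size=1)

# DDP replicates the model per process and averages gradients across
# ranks during backward, so each replica takes identical optimizer steps.
model = DDP(nn.Linear(16, 4))
out = model(torch.randn(8, 16))
out.sum().backward()

dist.destroy_process_group()
```

With `world_size=1` the collectives are trivial, but the same script scales to many processes unchanged, which is the point of the data-parallel strategy.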

In this section, we explore distributed training on multiple CPUs, focusing on benefits, implementation, and using Intel oneCCL for efficient communication in resource-constrained environments.

What's included

1 video, 3 readings, 1 assignment
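oneCCL plugs into PyTorch as an alternative communication backend. A small sketch of the backend-selection pattern, assuming the `oneccl_bindings_for_pytorch` package (whose import registers the `ccl` backend) and falling back to PyTorch's built-in gloo when it is absent:

```python
import torch.distributed as dist

# Importing oneccl_bindings_for_pytorch registers the "ccl" backend with
# torch.distributed; gloo is the stock CPU backend available everywhere.
try:
    import oneccl_bindings_for_pytorch  # noqa: F401
    backend = "ccl"
except ImportError:
    backend = "gloo"

print("collective backend:", backend)
# The chosen string is then passed to dist.init_process_group(backend, ...)
# exactly as in any other distributed setup.
```

On Intel CPUs, oneCCL's allreduce implementations are tuned for the platform's cache and interconnect hierarchy, which is the motivation for preferring it over gloo when available.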

In this section, we explore multi-GPU training strategies, analyze interconnection topologies, and configure NCCL for efficient distributed deep learning operations.

What's included

1 video, 4 readings, 1 assignment
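NCCL is configured almost entirely through environment variables read when the process group initializes. A small sketch; the variable names are real NCCL settings, the values are illustrative, and the peer-access probe simply reports whether a direct GPU-to-GPU path exists:

```python
import os
import torch

# NCCL reads these at init time, so they must be set before init_process_group.
os.environ["NCCL_DEBUG"] = "INFO"      # log topology detection and ring/tree setup
os.environ["NCCL_P2P_DISABLE"] = "0"   # permit direct GPU-to-GPU (NVLink/PCIe) paths

# Inspect the hardware before choosing a parallelization strategy.
n_gpus = torch.cuda.device_count()
print(f"visible GPUs: {n_gpus}")
if n_gpus >= 2:
    # True when GPU 0 can read GPU 1's memory directly (e.g. over NVLink),
    # the case where P2P transfers bypass host memory entirely.
    print("peer access 0->1:", torch.cuda.can_device_access_peer(0, 1))
```

Running once with `NCCL_DEBUG=INFO` and reading the logged topology is usually the fastest way to confirm which interconnects NCCL actually chose.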

In this section, we explore distributed training on computing clusters, focusing on Open MPI and NCCL for efficient communication and resource management across multiple machines.

What's included

1 video, 4 readings, 1 assignment

Instructor

Packt - Course Instructors
Packt
1,365 Courses • 357,855 learners

Offered by

Packt
