In this comprehensive course, you will explore the intricate world of Large Language Models (LLMs) and gain the skills to design, train, and deploy them using cutting-edge MLOps practices. LLMs are revolutionizing the AI landscape, and understanding how to develop and manage them is essential for AI professionals.

Recommended experience
Beginner level
Ideal for AI engineers, NLP professionals, and LLM specialists. Basic knowledge of Python and cloud platforms like AWS is recommended.
What you'll learn
Design and manage effective LLM training and deployment pipelines.
Implement supervised fine-tuning and evaluate LLM performance.
Deploy scalable, end-to-end LLM applications using cloud tools.
Skills you'll gain
Details to know

Add to your LinkedIn profile
November 2025
11 assignments

There are 11 modules in this course
In this section, we delve into the concept and architecture of LLM Twin, an innovative AI model mimicking a person's writing style and personality. We discuss its significance, benefits over generic chatbots, and the planning process for creating an effective LLM product. Detailed insights into the design of the feature, training, and inference pipelines are explored to structure a robust ML system.
What's included
2 videos, 3 readings, 1 assignment
2 videos • Total 2 minutes
- Course Overview • 1 minute
- Understanding the LLM Twin Concept and Architecture - Overview Video • 1 minute
3 readings • Total 30 minutes
- Introduction • 10 minutes
- Building ML systems with feature/training/inference pipelines • 10 minutes
- Designing the System Architecture of the LLM Twin • 10 minutes
1 assignment • Total 10 minutes
- Designing an LLM-Based System • 10 minutes
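The feature/training/inference (FTI) pipeline pattern covered in this module can be sketched in a few lines. This is a toy illustration of the separation of concerns, assuming nothing about the course's actual code; every function name and data structure here is made up.

```python
# Toy sketch of the FTI pattern: three independent pipelines that communicate
# only through their artifacts (features and a trained model), never directly.

def feature_pipeline(raw_docs: list[str]) -> list[dict]:
    """Transform raw data into features (stands in for a feature store write)."""
    return [{"text": d.strip().lower(), "length": len(d)} for d in raw_docs]

def training_pipeline(features: list[dict]) -> dict:
    """Consume features and produce a model artifact (here, a trivial statistic)."""
    avg_len = sum(f["length"] for f in features) / len(features)
    return {"avg_length": avg_len}

def inference_pipeline(model: dict, query: str) -> str:
    """Serve predictions using only the trained model artifact."""
    verdict = "long" if len(query) > model["avg_length"] else "short"
    return f"query is {verdict}"

features = feature_pipeline(["Hello World", "An LLM Twin mimics your style"])
model = training_pipeline(features)
print(inference_pipeline(model, "A short query"))  # → query is short
```

Because each stage depends only on the artifact produced by the previous one, the three pipelines can be scheduled, scaled, and redeployed independently, which is the core argument of the reading above.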
In this section, we introduce the essential tools needed for the course, particularly for the LLM Twin project. We provide an overview of the tech stack, cover installation procedures for Python and its ecosystem, dependency management with Poetry, and task execution using Poe the Poet. This section also provides insights into MLOps and LLMOps tooling, including ZenML and Hugging Face, and explains their roles in the project. Finally, we guide users in setting up an AWS account, focusing on SageMaker for deploying ML models.
What's included
1 video, 2 readings, 1 assignment
1 video • Total 1 minute
- Tooling and Installation - Overview Video • 1 minute
2 readings • Total 40 minutes
- Introduction • 10 minutes
- ZenML: Orchestrator, Artifacts, and Metadata • 30 minutes
1 assignment • Total 10 minutes
- MLOps and LLMOps Concepts • 10 minutes
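The Poetry dependency management and Poe the Poet task running mentioned above both live in a project's `pyproject.toml`. The fragment below is a hedged sketch of what such a file can look like; the package name, version constraints, and task names are illustrative assumptions, not the course's actual configuration.

```toml
# Illustrative pyproject.toml fragment (names and versions are assumptions)

[tool.poetry]
name = "llm-twin"
version = "0.1.0"
description = "Example project layout for an LLM application"

[tool.poetry.dependencies]
python = "^3.11"
zenml = "*"

# Poe the Poet tasks: run with `poetry run poe <task-name>`
[tool.poe.tasks]
run-feature-pipeline = "python -m pipelines.feature"
test = "pytest tests/"
```

Poetry resolves and locks the dependencies, while Poe the Poet gives short, memorable names to the project's common commands.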
In this section, we delve into the LLM Twin project by designing a data collection pipeline for gathering raw data essential for LLM use cases, such as fine-tuning and inference. We'll focus on implementing an ETL pipeline that aggregates data from platforms like Medium and GitHub into a MongoDB data warehouse, thus simulating real-world machine learning project scenarios.
What's included
1 video, 4 readings, 1 assignment
1 video • Total 1 minute
- Data Engineering - Overview Video • 1 minute
4 readings • Total 100 minutes
- Introduction • 10 minutes
- ZenML Pipeline and Steps • 30 minutes
- The Crawlers • 30 minutes
- The ORM and ODM Software Patterns • 30 minutes
1 assignment • Total 10 minutes
- Designing Data Collection Pipelines • 10 minutes
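The ODM (Object-Document Mapper) pattern from the reading above maps Python classes to documents in a document store such as MongoDB. The minimal sketch below uses an in-memory dict in place of a real MongoDB collection so it is self-contained; class and method names are illustrative, not the course's implementation.

```python
# Minimal ODM sketch: objects serialize themselves to plain documents and
# query by type. A dict stands in for a MongoDB collection here.
import uuid

_FAKE_DB: dict[str, dict] = {}  # stand-in for a MongoDB collection

class Document:
    def __init__(self, **fields):
        self.id = str(uuid.uuid4())
        self.fields = fields

    def save(self) -> str:
        """Persist the object as a plain document (dict)."""
        _FAKE_DB[self.id] = {"type": type(self).__name__, **self.fields}
        return self.id

    @classmethod
    def find(cls, **query) -> list[dict]:
        """Return documents of this type matching every query field."""
        return [
            doc for doc in _FAKE_DB.values()
            if doc["type"] == cls.__name__
            and all(doc.get(k) == v for k, v in query.items())
        ]

class ArticleDocument(Document):
    pass

ArticleDocument(platform="medium", title="Intro to RAG").save()
ArticleDocument(platform="github", title="README").save()
print(ArticleDocument.find(platform="medium"))
```

The appeal of the pattern is that crawler code works with typed Python objects while the storage layer sees only schemaless documents, which is exactly the fit for aggregating heterogeneous sources like Medium and GitHub.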
In this section, we explore the Retrieval-Augmented Generation (RAG) feature pipeline, a crucial technique for grounding large language models in custom data without constant fine-tuning. We introduce the fundamental components of a naive RAG system, such as chunking, embedding, and vector databases. We also delve into the LLM Twin's RAG feature pipeline architecture, applying theoretical concepts through practical implementation, and discuss the importance of RAG for addressing issues like model hallucinations and outdated knowledge. This section provides in-depth insights into advanced RAG techniques and the role of batch pipelines in syncing data for improved accuracy.
What's included
1 video, 7 readings, 1 assignment
1 video • Total 1 minute
- RAG Feature Pipeline - Overview Video • 1 minute
7 readings • Total 170 minutes
- Introduction • 10 minutes
- What are Embeddings? • 30 minutes
- DB Operations • 10 minutes
- Exploring the LLM Twin's RAG Feature Pipeline Architecture • 30 minutes
- Change data capture: syncing the data warehouse and feature store • 30 minutes
- Querying the Data Warehouse • 30 minutes
- OVM • 30 minutes
1 assignment • Total 10 minutes
- Advanced Concepts in Retrieval-Augmented Generation (RAG) • 10 minutes
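The chunking, embedding, and vector-database steps of a naive RAG pipeline described above can be sketched end to end in a few lines. To keep the example self-contained, the "embedding" below is a trivial character-frequency vector rather than a real embedding model, and a plain list stands in for a vector database; this illustrates the data flow only.

```python
# Toy sketch of naive RAG ingestion and retrieval: chunk → embed → store,
# then embed the query and rank chunks by cosine similarity.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size chunks (real chunkers respect sentences)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> dict[str, float]:
    """Placeholder embedding: an L2-normalized character-frequency vector."""
    counts = Counter(text.lower())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {ch: c / norm for ch, c in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(a[k] * b.get(k, 0.0) for k in a)

vector_db = []  # (chunk, embedding) pairs, standing in for a vector database
for piece in chunk("Retrieval-augmented generation grounds an LLM in your own data."):
    vector_db.append((piece, embed(piece)))

query_vec = embed("ground the LLM in data")
best = max(vector_db, key=lambda item: cosine(query_vec, item[1]))
print(best[0])
```

Swapping in a real embedding model and vector store changes the components but not this shape, which is why the module treats chunking, embedding, and the database as separable concerns.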
In this section, we will explore the process of Supervised Fine-Tuning (SFT) for Large Language Models (LLMs). We'll delve into the creation of instruction datasets and how they are used to refine LLMs for specific tasks. This section covers the steps involved in crafting these datasets, the importance of data quality, and presents various techniques and strategies for enhancing the fine-tuning process. Our focus will be on transforming general-purpose models into specialized assistants through SFT, enabling them to provide more coherent and relevant responses.
What's included
1 video, 7 readings, 1 assignment
1 video • Total 1 minute
- Supervised Fine-Tuning - Overview Video • 1 minute
7 readings • Total 150 minutes
- Introduction • 10 minutes
- Data Deduplication • 30 minutes
- Data Generation • 10 minutes
- Creating Our Own Instruction Dataset • 30 minutes
- Exploring SFT and its Techniques • 30 minutes
- Training Parameters • 10 minutes
- Fine-tuning in Practice • 30 minutes
1 assignment • Total 10 minutes
- Advanced Techniques in Language Model Fine-Tuning • 10 minutes
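An instruction dataset of the kind this module builds is, at its core, a set of (instruction, response) pairs rendered through a prompt template. The sketch below uses a generic Alpaca-style template as an assumption; it is not necessarily the template the course uses.

```python
# Sketch: turning raw (instruction, response) pairs into SFT training strings.
# The template is a generic Alpaca-style format, assumed for illustration.
SFT_TEMPLATE = (
    "Below is an instruction. Write a response that completes it.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def build_sft_samples(pairs: list[tuple[str, str]]) -> list[str]:
    """Render each pair through the template to get one training sample."""
    return [SFT_TEMPLATE.format(instruction=i, response=r) for i, r in pairs]

samples = build_sft_samples([
    ("Summarize what an LLM Twin is.",
     "An LLM Twin is a model fine-tuned to mimic a person's writing style."),
])
print(samples[0])
```

Consistency matters more than the specific template: the same format must be used at training and inference time, and, as the deduplication reading stresses, near-duplicate pairs should be removed before rendering.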
In this section, we delve into the realms of preference alignment, discussing how Direct Preference Optimization (DPO) can fine-tune language models to better align with human preferences. We elaborate on creating and evaluating preference datasets, ensuring our models capture nuanced human interactions.
What's included
1 video, 4 readings, 1 assignment
1 video • Total 1 minute
- Fine-Tuning with Preference Alignment - Overview Video • 1 minute
4 readings • Total 80 minutes
- Introduction • 10 minutes
- Evaluating Preferences • 30 minutes
- Preference Alignment • 10 minutes
- Implementing DPO • 30 minutes
1 assignment • Total 10 minutes
- Understanding Preference Alignment in AI Systems • 10 minutes
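The DPO objective discussed above can be written down for a single preference pair using plain floats for the log-probabilities. Real implementations (for example, TRL's `DPOTrainer`) compute these from the policy and a frozen reference model over whole sequences; this sketch only shows the loss itself.

```python
# DPO loss on one (chosen, rejected) pair:
#   loss = -log sigmoid(beta * ((logp_c^policy - logp_r^policy)
#                             - (logp_c^ref    - logp_r^ref)))
import math

def dpo_loss(logp_chosen_policy, logp_rejected_policy,
             logp_chosen_ref, logp_rejected_ref, beta=0.1):
    policy_margin = logp_chosen_policy - logp_rejected_policy
    ref_margin = logp_chosen_ref - logp_rejected_ref
    logits = beta * (policy_margin - ref_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy prefers the chosen answer more strongly than the reference
# does, the loss falls below -log(0.5) ≈ 0.693.
print(dpo_loss(-2.0, -5.0, -3.0, -4.0))  # policy margin 3 > reference margin 1
```

The `beta` hyperparameter controls how far the policy is allowed to drift from the reference model: smaller values keep it closer, larger values let preferences dominate.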
In this section, we delve into the evaluation of large language models (LLMs), addressing various evaluation methods and their significance. We cover general-purpose, domain-specific, and task-specific evaluations, highlighting the unique challenges each presents. Additionally, we explore retrieval-augmented generation (RAG) pipelines and introduce tools like Ragas and ARES for comprehensive LLM assessment.
What's included
1 video, 3 readings, 1 assignment
1 video • Total 1 minute
- Evaluating LLMs - Overview Video • 1 minute
3 readings • Total 70 minutes
- Introduction • 10 minutes
- Task-specific LLM Evaluations • 30 minutes
- Generating answers • 30 minutes
1 assignment • Total 10 minutes
- Advanced Evaluation Techniques for LLM Systems • 10 minutes
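A minimal example of the task-specific evaluation idea above is exact-match accuracy after light normalization. Frameworks like Ragas go much further and score full RAG pipelines on dimensions such as faithfulness and answer relevance; this sketch only illustrates the basic scoring loop, with a made-up normalization scheme.

```python
# Sketch: exact-match accuracy over model predictions vs. reference answers,
# after a simple, assumed normalization (lowercase, strip, drop trailing dot).
def normalize(text: str) -> str:
    return " ".join(text.lower().strip().rstrip(".").split())

def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris.", "blue", "42"]
refs = ["paris", "Blue", "43"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match after normalization
```

Exact match only suits tasks with short, unambiguous answers; for open-ended generation, the readings turn to LLM-as-judge and RAG-specific metrics instead.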
In this section, we dive into optimizing the inference of large language models to boost their performance and efficiency. We'll explore key strategies to speed up the inference process, a crucial step given these models' heavy computational and memory demands. From reducing latency to improving throughput and minimizing memory usage, we examine how specialized hardware and techniques such as optimized attention and quantization enhance model serving. Mastering these optimizations unlocks more efficient deployments, whether for latency-sensitive tasks like code completion or batch workloads like document generation.
What's included
1 video, 3 readings, 1 assignment
1 video • Total 1 minute
- Inference Optimization - Overview Video • 1 minute
3 readings • Total 90 minutes
- Introduction • 30 minutes
- Optimized Attention Mechanisms • 30 minutes
- Introduction to Quantization • 30 minutes
1 assignment • Total 10 minutes
- Optimizing Large Language Model Inference • 10 minutes
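The memory savings behind the quantization reading above come from storing weights in fewer bits. The round-trip below sketches symmetric int8 quantization on a handful of floats; production schemes (GPTQ, GGUF, and friends) are far more sophisticated, but the principle is the same.

```python
# Symmetric int8 quantization sketch: scale weights into [-128, 127],
# store the integers plus one float scale, and dequantize on the way back.
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))  # round-trip error is bounded by scale / 2
```

Each weight now takes one byte instead of four, at the cost of a bounded rounding error; the art of real quantization methods is keeping that error from degrading model quality.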
In this section, we explore the construction and implementation of a RAG inference pipeline, starting from understanding its architecture to implementing key modules such as retrieval, prompt creation, and interaction with the LLM. We introduce methods for optimizing retrieval processes like query expansion and self-querying while utilizing OpenAI's API, and integrate these techniques into a comprehensive retrieval module. We'll conclude by assembling these elements into a cohesive inference pipeline and preparing for further deployment steps.
What's included
1 video, 5 readings, 1 assignment
1 video • Total 1 minute
- RAG Inference Pipeline - Overview Video • 1 minute
5 readings • Total 130 minutes
- Introduction • 30 minutes
- Self-querying • 30 minutes
- Advanced RAG Post-retrieval Optimization: Reranking • 10 minutes
- Implementing the LLM Twin's RAG Inference Pipeline • 30 minutes
- Bringing Everything Together into the RAG Inference Pipeline • 30 minutes
1 assignment • Total 10 minutes
- Advanced RAG Pipeline Implementation • 10 minutes
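The post-retrieval reranking step covered above follows a common shape: over-fetch candidates with a cheap first-pass score, then reorder them with a more precise scorer and keep the top-k. The two keyword-overlap scorers below are placeholders standing in for embedding similarity and a cross-encoder reranker; they are illustrative assumptions, not the course's retrieval module.

```python
# Reranking sketch: cheap first-pass retrieval over-fetches, a "stronger"
# second-pass scorer reorders. Both scorers here are toy placeholders.
def first_pass_score(query: str, doc: str) -> float:
    """Cheap score: fraction of query words present in the document."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / max(len(q), 1)

def rerank_score(query: str, doc: str) -> float:
    """Pricier score: overlap, penalized when the document is much longer."""
    overlap = first_pass_score(query, doc)
    return overlap / (1 + abs(len(doc.split()) - len(query.split())))

def retrieve(query: str, corpus: list[str], fetch_k: int = 3, top_k: int = 1):
    candidates = sorted(corpus, key=lambda d: first_pass_score(query, d),
                        reverse=True)[:fetch_k]
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)[:top_k]

corpus = [
    "deploying models to production",
    "rag retrieval and reranking explained step by step in long form",
    "rag retrieval and reranking",
]
print(retrieve("rag reranking", corpus))
```

The pattern pays off because the expensive scorer only ever sees `fetch_k` candidates, not the whole corpus; query expansion and self-querying slot in before the first pass.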
In this section, we focus on deploying the inference pipeline for large language models (LLMs) in ML applications, ensuring models are accessible and efficient for end users. We'll cover deployment strategies, architectural decisions, and optimization techniques to address challenges like computing power and feature access.
What's included
1 video, 5 readings, 1 assignment
1 video • Total 1 minute
- Inference Pipeline Deployment - Overview Video • 1 minute
5 readings • Total 110 minutes
- Introduction • 10 minutes
- Monolithic versus Microservices Architecture in Model Serving • 10 minutes
- Exploring the LLM Twin's Inference Pipeline Deployment Strategy • 30 minutes
- Deploying the LLM Twin model to AWS SageMaker • 30 minutes
- Calling the AWS SageMaker Inference Endpoint • 30 minutes
1 assignment • Total 10 minutes
- Modern ML Model Deployment • 10 minutes
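Calling a SageMaker real-time endpoint, as in the reading above, boils down to building a JSON payload, invoking the endpoint via `boto3`, and parsing the response. The endpoint name and payload schema below are made-up assumptions; check your own deployment's expected input format. The actual `invoke_endpoint` call needs AWS credentials and a live endpoint, so this sketch keeps it inside a function that the example does not execute.

```python
# Hedged sketch of a SageMaker endpoint client. ENDPOINT_NAME and the
# {"inputs": ..., "parameters": ...} schema are illustrative assumptions.
import json

ENDPOINT_NAME = "llm-twin-endpoint"  # hypothetical endpoint name

def build_payload(prompt: str, max_new_tokens: int = 256) -> bytes:
    return json.dumps({"inputs": prompt,
                       "parameters": {"max_new_tokens": max_new_tokens}}).encode()

def parse_response(body: bytes) -> str:
    return json.loads(body)["generated_text"]

def invoke(prompt: str) -> str:
    """Requires AWS credentials and a deployed endpoint; not run here."""
    import boto3
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=build_payload(prompt),
    )
    return parse_response(response["Body"].read())

payload = build_payload("Write a short bio in my style.")
print(json.loads(payload)["parameters"]["max_new_tokens"])  # → 256
```

Keeping payload construction and response parsing in pure functions, separate from the network call, makes the client testable without touching AWS.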
In this section, we dive into the intricacies of MLOps and LLMOps, exploring their roles in automating machine learning processes and handling large language models. We will cover their origins in DevOps, highlight the unique challenges LLMOps addresses, such as prompt management and scaling issues, and illustrate the practical steps for deploying these systems efficiently. The section also includes discussions on the transition from manual deployment to cloud-based solutions, emphasizing the advantages of CI/CD pipelines and Dockerization in executing and managing models at scale.
What's included
1 video, 7 readings, 1 assignment
1 video • Total 1 minute
- MLOps and LLMOps - Overview Video • 1 minute
7 readings • Total 210 minutes
- Introduction • 30 minutes
- MLOps Principles • 30 minutes
- Prompt Monitoring • 30 minutes
- Setting up the ZenML Cloud • 30 minutes
- Run the Pipelines on AWS • 30 minutes
- GitHub Actions CI YAML File • 30 minutes
- Trigger downstream pipelines • 30 minutes
1 assignment • Total 10 minutes
- MLOps and LLMOps Fundamentals • 10 minutes
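A CI pipeline of the kind the GitHub Actions reading above covers typically lives in `.github/workflows/`. The fragment below is a hedged sketch in that spirit; the workflow name, Python version, and commands are assumptions, not the course's actual file.

```yaml
# Illustrative GitHub Actions CI workflow (names and steps are assumptions):
# on every push to main, check out the repo, install dependencies with
# Poetry, and run the test suite.
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install poetry && poetry install
      - run: poetry run pytest
```

A CD workflow extends this shape: later jobs build a Docker image and trigger the downstream ZenML pipelines on AWS, which is exactly the manual-to-automated transition this module describes.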
Instructor

Offered by


Packt helps tech professionals put software to work by distilling and sharing the working knowledge of their peers. Packt is an established global technical learning content provider, founded in Birmingham, UK, with over twenty years of experience delivering premium, rich content from groundbreaking authors on a wide range of emerging and popular technologies.
Frequently asked questions
Yes, you can preview the first video and view the syllabus before you enroll. You must purchase the course to access content not included in the preview.
If you decide to enroll in the course before the session start date, you will have access to all of the lecture videos and readings for the course. You’ll be able to submit assignments once the session starts.
Once you enroll and your session begins, you will have access to all videos and other resources, including reading items and the course discussion forum. You’ll be able to view and submit practice assessments, and complete required graded assignments to earn a grade and a Course Certificate.
If you complete the course successfully, your electronic Course Certificate will be added to your Accomplishments page; from there, you can print your Course Certificate or add it to your LinkedIn profile.
This course is currently available only to learners who have paid or received financial aid, when available.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.
More questions
Financial aid available


