This hands-on course equips learners with the skills to design, build, and manage end-to-end ETL (Extract, Transform, Load) workflows using Apache Spark in a real-world data engineering context. Structured into two comprehensive modules, the course begins with foundational setup, guiding learners through the installation of essential components such as PySpark, Hadoop, and MySQL. Participants will learn how to configure their environment, organize project structures, and explore source datasets effectively.

What you will learn
- Install and configure PySpark, Hadoop, and MySQL for ETL workflows.
- Build Spark applications for full and incremental data loads via JDBC.
- Apply transformations, handle deployment issues, and optimize ETL pipelines.
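The full and incremental JDBC loads listed above can be sketched roughly as follows. This is a minimal illustration, not course material: the connection URL, credentials, and the `orders` / `updated_at` / `orders_clean` names are all hypothetical placeholders.

```python
# Sketch of full vs. incremental JDBC loads with PySpark and MySQL.
# All connection details and table/column names below are assumptions.
try:
    from pyspark.sql import SparkSession
except ImportError:          # pyspark may not be installed in every environment
    SparkSession = None

JDBC_URL = "jdbc:mysql://localhost:3306/sales_db"   # hypothetical instance
PROPS = {"user": "etl_user", "password": "secret",
         "driver": "com.mysql.cj.jdbc.Driver"}

def incremental_query(table, watermark_col, last_value):
    """Build a JDBC pushdown subquery that fetches only rows newer than
    the last watermark seen -- the core idea of an incremental load."""
    return (f"(SELECT * FROM {table} "
            f"WHERE {watermark_col} > '{last_value}') AS t")

def run_etl():
    """Full load, incremental load, then a write back to MySQL.
    Not invoked here because it needs a live Spark + MySQL setup."""
    spark = SparkSession.builder.appName("mysql-etl-sketch").getOrCreate()

    # Full load: pull the entire source table over JDBC.
    full_df = spark.read.jdbc(JDBC_URL, "orders", properties=PROPS)

    # Incremental load: only rows updated since the stored watermark.
    inc_df = spark.read.jdbc(
        JDBC_URL,
        incremental_query("orders", "updated_at", "2026-01-01 00:00:00"),
        properties=PROPS)

    # A simple transformation before loading into the target table.
    (inc_df.filter("amount > 0")
           .write.mode("append")
           .jdbc(JDBC_URL, "orders_clean", properties=PROPS))
```

Passing a subquery as the JDBC "table" pushes the watermark filter down to MySQL, so only new rows cross the network instead of the whole table.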
Skills you will gain
Details to know

Add to your LinkedIn profile
6 assignments
See how employees at top companies are mastering in-demand skills

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

Explore more from Data Analysis
Learner reviews
- 5 stars: 50%
- 4 stars: 36.36%
- 3 stars: 9.09%
- 2 stars: 0%
- 1 star: 4.54%
Showing 3 of 22 reviews
Reviewed on Jan 19, 2026
Learners feel they actually build powerful pipelines — from raw ingestion to analytics-ready outputs, not just toy examples.
Reviewed on Jan 31, 2026
Great mix of theory and hands-on labs. I now feel comfortable using DataFrames, Spark SQL, and basic optimization techniques.
Reviewed on Apr 9, 2026
Comprehensive Spark ETL course with practical MySQL integration. Covers transformations, incremental loads, and real deployment challenges effectively for beginners.