Northeastern University
Data Warehousing and Integration Part 2

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
1 week to complete
At 10 hours a week
Flexible schedule
Learn at your own pace


There are 6 modules in this course

In this module, you'll learn about ETL (Extract, Transform, Load) processes, an essential part of Data Warehousing and Data Integration solutions. ETL processes can be complex and costly, but effective design and modeling can significantly reduce development and maintenance costs. You'll be introduced to the basics of Business Process Modeling Notation (BPMN), including its key components, such as flow objects, gateways, events, and artifacts, which are essential for modeling business processes. You will then explore how BPMN can be adapted for the conceptual modeling of ETL tasks, with a particular focus on differentiating control tasks from data tasks. Control tasks manage the orchestration of ETL processes, while data tasks handle data manipulation; both are critical in conceptualizing ETL workflows. By the end of this module, you'll have a solid understanding of how to design ETL processes using BPMN, enabling greater flexibility and adaptability across various tools.
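The control-task versus data-task distinction can be sketched in plain code. The following is an illustrative Python sketch (not course material): `extract`, `clean`, `load`, and `run_pipeline` are hypothetical names, and the gateway-style branch mirrors how a BPMN exclusive gateway routes an ETL flow.

```python
def extract(rows):
    """Data task: pull raw records from a (here, in-memory) source."""
    return list(rows)

def clean(rows):
    """Data task: drop records missing a required field."""
    return [r for r in rows if r.get("id") is not None]

def load(rows, target):
    """Data task: append records to the target store; return rows loaded."""
    target.extend(rows)
    return len(rows)

def run_pipeline(source, target):
    """Control task: orchestrate the data tasks, with a gateway-like
    decision (skip the load when nothing survives cleaning)."""
    staged = clean(extract(source))
    if not staged:          # exclusive-gateway branch: nothing to load
        return 0
    return load(staged, target)

warehouse = []
loaded = run_pipeline([{"id": 1}, {"id": None}, {"id": 2}], warehouse)
```

Keeping orchestration logic out of the data tasks, as above, is what makes the conceptual model portable across ETL tools.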

What's included

2 videos, 8 readings, 2 assignments

In this module, you will dive into Talend Studio, a powerful Eclipse-based data integration platform that transforms complex ETL operations into intuitive visual workflows. By exploring Talend's drag-and-drop interface, you will learn to navigate the core components of the platform. You'll master fundamental ETL operations by studying essential components like tMap for complex data transformations and joins, tJoin for straightforward data linking, and various input/output components for connecting to databases, files, and APIs. By the end of the module, you will understand how Talend automatically generates executable Java code from visual designs, enabling you to create scalable, production-ready data integration solutions that can handle both batch processing and real-time data scenarios across diverse technological environments.
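To make the tMap idea concrete, here is a minimal Python sketch of the kind of lookup-join-and-transform logic that tMap expresses visually (Talend itself compiles such designs to Java). The datasets, field names, and the uppercase transformation are invented for the example.

```python
customers = [
    {"customer_id": 1, "name": "Ada"},
    {"customer_id": 2, "name": "Grace"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "amount": 99.5},
    {"order_id": 11, "customer_id": 3, "amount": 20.0},  # no matching customer
]

# Build a hash lookup on the join key, as tMap does with its lookup input.
lookup = {c["customer_id"]: c for c in customers}

# Inner join plus a per-row transformation (uppercase the customer name).
joined = [
    {"order_id": o["order_id"],
     "customer": lookup[o["customer_id"]]["name"].upper(),
     "amount": o["amount"]}
    for o in orders
    if o["customer_id"] in lookup
]

# Unmatched rows would flow to a reject output in tMap.
rejects = [o for o in orders if o["customer_id"] not in lookup]
```

The join, the field-level expression, and the reject flow are the three pieces you wire up graphically in the tMap editor.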

What's included

3 readings, 1 assignment

In this module, we transition from on-premises Data Warehousing to Data Engineering. While Data Engineering has its roots in Data Warehousing, it encompasses much more. We’ll explore the key enablers of this evolution, specifically cloud computing and DevOps. You will learn about the benefits of cloud development, including enhanced scalability, cost efficiency, and flexibility in data operations. We will also dive into how traditional IT infrastructure components—such as security, networking, and compute resources—are redefined in cloud environments using AWS. Additionally, you'll gain an understanding of DevOps in the cloud, focusing on the use of virtual machines and containers to streamline continuous integration and deployment. We will cover key DevOps practices like Infrastructure as Code (IaC), CI/CD pipelines, and automated testing, emphasizing their role in ensuring consistency, faster development cycles, and secure applications. You will then explore what data engineering entails and the skills required to become a data engineer. Finally, we’ll introduce the concept of the data engineering lifecycle and its different phases, focusing on the first two: Data Generation and Storage.
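The first two lifecycle phases, Data Generation and Storage, can be illustrated with a small hedged sketch (not from the course). A local JSON Lines buffer stands in for a cloud object store such as Amazon S3, and the event shape is invented.

```python
import io
import json

def generate_events(n):
    """Data generation: emit synthetic clickstream-like events."""
    return [{"event_id": i, "action": "click"} for i in range(n)]

def store_events(events, fh):
    """Storage: persist one JSON document per line (JSON Lines),
    a common landing format for raw data in object storage."""
    for e in events:
        fh.write(json.dumps(e) + "\n")
    return len(events)

buf = io.StringIO()  # stands in for an object-store upload target
written = store_events(generate_events(3), buf)
```

In a real cloud pipeline the write target would be an object store and the format choice (JSON Lines, Parquet, etc.) would be an early architectural decision.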

What's included

1 video, 12 readings, 2 assignments

In this module, we will explore the next two phases of the data engineering lifecycle: Ingestion and Transformation. Data ingestion refers to the process of moving data from source systems into storage, making it available for processing and analysis. As you delve into the reading, you will examine key ingestion patterns, including batch versus streaming ingestion, synchronous versus asynchronous methods, and push, pull, and hybrid approaches. You’ll also explore essential engineering considerations such as scalability, reliability, and data quality management, along with the challenges posed by schema changes. The reading will introduce various technologies that enable data ingestion, such as JDBC/ODBC, Change Data Capture (CDC), APIs, and event-streaming platforms like Kafka. We then shift focus to the transformation phase of the lifecycle, exploring different types of transformations that integrate complex business logic into data pipelines. At the end of the module, we will focus on data architecture and implementing good architecture principles to build scalable and reliable data pipelines.
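One of the ingestion patterns described above, pull-based incremental batch ingestion with a high watermark, can be sketched as follows. This is an illustrative example only: the source table, column names, and date strings are hypothetical, and the same watermark idea underlies JDBC-based incremental loads and some CDC setups.

```python
source_table = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-03"},
    {"id": 3, "updated_at": "2024-01-05"},
]

def ingest_since(table, watermark):
    """Pull only rows changed after the last recorded watermark,
    then advance the watermark for the next batch run."""
    batch = [r for r in table if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=watermark)
    return batch, new_watermark

# First run: two rows are newer than the stored watermark.
batch, wm = ingest_since(source_table, "2024-01-02")
# Second run with no new changes: an empty batch, watermark unchanged.
batch2, wm2 = ingest_since(source_table, wm)
```

Persisting the watermark between runs is what makes the pattern reliable across failures, one of the engineering considerations the reading covers.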

What's included

4 videos, 12 readings, 2 assignments, 2 app items

In this module, we will explore data characteristics and how they drive infrastructure decisions. In today’s data-driven world, understanding the properties of your data is essential for designing robust data pipelines. We’ll go over key characteristics like volume, which refers to the size of datasets, and velocity, which concerns how frequently new data is generated. We’ll also take a look at variety, which focuses on data formats and sources, and veracity, which emphasizes data accuracy and trustworthiness. The ultimate goal is to uncover value from data through insightful analysis. As we delve into pipeline design, you'll learn how these characteristics influence key decisions, such as the choice of storage, processing, and analytics tools. We will also cover essential AWS services like Amazon S3, Glue, and Athena, exploring how they support scalable and flexible data engineering. By the end of this module, you’ll have a comprehensive understanding of how to build effective data solutions to meet both technical and business needs.
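As a small concrete example of how storage layout supports tools like Glue and Athena, the sketch below builds Hive-style partition prefixes for an object store; engines can then prune partitions instead of scanning a whole dataset. The bucket and dataset names are invented, and this is one common layout, not the only option.

```python
from datetime import date

def partition_key(bucket, dataset, dt, filename):
    """Return an S3-style object key partitioned by year/month/day,
    the Hive-style layout that Glue crawls and Athena can prune."""
    return (f"s3://{bucket}/{dataset}/"
            f"year={dt.year}/month={dt.month:02d}/day={dt.day:02d}/"
            f"{filename}")

key = partition_key("analytics-raw", "events", date(2024, 5, 7), "part-0000.json")
```

Choosing partition columns that match common query filters (here, date) is a direct example of data characteristics driving an infrastructure decision.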

What's included

6 readings, 1 assignment

Welcome to the final stage of the data engineering lifecycle: serving data. In this module, we will focus on how to effectively serve data for analytics, machine learning (ML), and reverse ETL to ensure that the data products you design are reliable, actionable, and trusted by stakeholders. Key topics include setting SLAs, identifying use cases, evolving data products with feedback, standardizing data definitions, and exploring delivery methods such as file exchanges, databases, and streaming systems. We’ll also cover the use of reverse ETL to improve business processes and discuss the importance of context for choosing the best visualization type and tools. We then delve into KPIs and metrics and how to classify them, including how to identify robust KPIs based on the business context. Finally, we will focus on creating intuitive dashboards by choosing the right analysis, visualizations, and metrics to showcase based on the business context and audience involved. By the end of this module, you will understand how to design and serve data solutions that drive meaningful action and are trusted by end users.
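The KPI discussion can be grounded with a minimal sketch of computing and classifying a metric before serving it to a dashboard. The metric choice, field names, and thresholds below are assumptions for illustration, not values from the course; in practice the target would come from the business context.

```python
def conversion_rate(visits, purchases):
    """KPI: share of visits that converted into a purchase."""
    return purchases / visits if visits else 0.0

def classify_kpi(value, target=0.05):
    """Traffic-light classification against an assumed business target."""
    if value >= target:
        return "on_track"
    if value >= 0.8 * target:
        return "at_risk"
    return "off_track"

rate = conversion_rate(visits=2000, purchases=90)
status = classify_kpi(rate)
```

Serving the classification alongside the raw number is one way to make a dashboard immediately actionable for its audience.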

What's included

11 readings, 1 assignment

Instructor

Venkat Krishnamurthy
Northeastern University
3 courses, 376 learners

Offered by

