Building Batch Data Pipelines on GCP

Building Batch Data Pipelines on GCP is a comprehensive free online course provided through Coursera. It runs for 6 weeks with about 17 hours of material, is taught in English, and is free of charge. Upon completion, you can receive an e-certificate from Coursera. The course is taught by Google Cloud Training.

Overview
  • Data pipelines typically fall under one of the Extract-Load, Extract-Load-Transform, or Extract-Transform-Load paradigms. This course describes which paradigm should be used, and when, for batch data. Furthermore, this course covers several technologies on Google Cloud for data transformation, including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
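
As a quick illustration of the EL/ELT distinction the course starts from, the sketch below loads a file from Cloud Storage into BigQuery as-is (the EL step) and then transforms it in place with SQL (making it ELT). The bucket, dataset, and table names are placeholders, not taken from the course labs.

```python
# Minimal EL/ELT sketch using the BigQuery Python client.
# All resource names below are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client()

# EL step: load the raw CSV from Cloud Storage straight into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders.csv",   # hypothetical source file
    "example_dataset.orders_raw",           # hypothetical staging table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load to finish

# T step (turning EL into ELT): transform inside BigQuery with SQL.
client.query(
    """
    CREATE OR REPLACE TABLE example_dataset.orders_clean AS
    SELECT order_id,
           CAST(amount AS NUMERIC) AS amount,
           DATE(created_at) AS order_date
    FROM example_dataset.orders_raw
    WHERE amount IS NOT NULL
    """
).result()
```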

Syllabus
    • Introduction
      • This module introduces the course and its agenda.
    • Introduction to Building Batch Data Pipelines
      • This module reviews the different methods of data loading: EL, ELT, and ETL, and when to use each.
    • Executing Spark on Dataproc
      • This module shows how to run Hadoop on Dataproc, how to leverage Cloud Storage, and how to optimize your Dataproc jobs.
    • Serverless Data Processing with Dataflow
      • This module covers using Dataflow to build your data processing pipelines (see the sketch after this syllabus).
    • Manage Data Pipelines with Cloud Data Fusion and Cloud Composer
      • This module shows how to manage data pipelines with Cloud Data Fusion and Cloud Composer.
    • Course Summary
      • This module provides a summary of the course.
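
For a concrete feel of the Dataflow material, here is a minimal Apache Beam word-count pipeline of the kind that can be run serverlessly on Dataflow. The file paths and the counting logic are illustrative only, not taken from the course labs.

```python
# Minimal Apache Beam pipeline: read text from Cloud Storage, count words,
# and write the results back. Paths below are hypothetical examples.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    # Runs locally; switching to runner="DataflowRunner" (plus project,
    # region, and temp_location options) executes the same pipeline
    # serverlessly on Dataflow.
    runner="DirectRunner",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/events.txt")
        | "SplitWords" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/word_counts")
    )
```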