A Big Data Hadoop and Spark project for absolute beginners

A Big Data Hadoop and Spark project for absolute beginners is a comprehensive online course from Udemy with about 10 hours of material. It is taught by FutureX Skill, and on completion you can receive an e-certificate from Udemy. The course is taught in English and is a paid course; visit the course page at Udemy for detailed price information.

Overview
  • Data Engineering, Spark, Hive, Python, PySpark, Scala, Coding framework, Testing, IntelliJ, Maven, Glue, Streaming

    What you'll learn:

    • Big Data, Hadoop, and Spark from scratch by solving a real-world use case using Python and Scala
    • Spark Scala & PySpark real-world coding framework.
    • Real-world coding best practices: logging, error handling, and configuration management using both Scala and Python.
    • A serverless big data solution using AWS Glue, Athena, and S3

    This course will prepare you for a real-world Data Engineer role!

    Get started with Big Data quickly by leveraging a free cloud cluster and solving a real-world use case! Learn Hadoop, Hive, and Spark (both Python and Scala) from scratch!

    Learn to code Spark Scala & PySpark like a real-world developer. Understand real-world coding best practices: logging, error handling, and configuration management using both Scala and Python.


    Project

    A bank is launching a new credit card and wants to identify prospects it can target in its marketing campaign.

    It has received prospect data from various internal and third-party sources. The data has issues such as missing or unknown values in certain fields, and it needs to be cleansed before any kind of analysis can be done.

    Since the data is huge, with billions of records, the bank has asked you to use Big Data Hadoop and Spark technology to cleanse, transform, and analyze it.
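
    As a rough illustration of the kind of cleansing the project involves, here is a minimal PySpark sketch. The column names, the "unknown" marker, and the file paths are hypothetical placeholders, not the course's actual dataset.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ProspectCleansing").getOrCreate()

    # Read the raw prospect data; without inferSchema, all columns arrive as strings.
    raw = spark.read.csv("hdfs:///data/prospects.csv", header=True)

    cleansed = (
        raw
        # Drop rows whose key identifiers are missing entirely.
        .dropna(subset=["first_name", "last_name"])
        # Normalize a sentinel "unknown" marker to a real null, then cast.
        .withColumn(
            "age",
            F.when(F.col("age") == "unknown", None)
             .otherwise(F.col("age"))
             .cast("int"),
        )
        # Fill remaining nulls with a column-appropriate default.
        .fillna({"occupation": "not provided"})
    )

    cleansed.write.mode("overwrite").parquet("hdfs:///data/prospects_clean")
    ```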

    What you will learn:

    • Big Data and Hadoop concepts

    • How to create a free Hadoop and Spark cluster using Google Dataproc

    • Hadoop hands-on - HDFS, Hive

    • Python basics

    • PySpark RDD - hands-on (sketched after this list)

    • PySpark SQL, DataFrame - hands-on (sketched after this list)

    • Project work using PySpark and Hive

    • Scala basics

    • Spark Scala DataFrame

    • Project work using Spark Scala

    • Spark Scala real-world coding framework and development using Winutils, Maven, and IntelliJ.

    • Python Spark Hadoop Hive coding framework and development using PyCharm

    • Building a data pipeline using Hive, PostgreSQL, and Spark (sketched after this list)

    • Logging, error handling, and unit testing of PySpark and Spark Scala applications (sketched after this list)

    • Spark Scala Structured Streaming

    • Applying Spark transformations to data stored in AWS S3 using Glue and viewing the data using Athena (sketched after this list)
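
    The sketches below illustrate a few of the topics above, under stated assumptions; they are not the course's own code. First, for the PySpark RDD hands-on topic, a minimal word-count example of the sort typically used to introduce RDDs, with a hypothetical input path:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("RddDemo").getOrCreate()
    sc = spark.sparkContext

    lines = sc.textFile("hdfs:///data/sample.txt")
    counts = (
        lines.flatMap(lambda line: line.split())  # split each line into words
             .map(lambda word: (word, 1))         # pair each word with a count of 1
             .reduceByKey(lambda a, b: a + b)     # sum the counts per word
    )
    print(counts.take(10))
    ```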
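
    For the PySpark SQL and DataFrame topic, a minimal sketch: the toy data is purely illustrative, but the view-plus-SQL pattern is the standard PySpark API.

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("SqlDemo").getOrCreate()

    # Build a small DataFrame in memory (toy data, purely illustrative).
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 45), ("carol", 29)],
        ["name", "age"],
    )

    # Register it as a temporary view and query it with plain SQL.
    df.createOrReplaceTempView("prospects")
    spark.sql("SELECT name FROM prospects WHERE age > 30").show()
    ```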
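
    For the Hive-PostgreSQL-Spark pipeline, one common shape is to read a Hive table with Spark and write the result out over JDBC. The table names, JDBC URL, and credentials below are hypothetical, the PostgreSQL JDBC driver must be on the classpath, and the course's own pipeline may differ.

    ```python
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("HiveToPostgres")
        .enableHiveSupport()  # lets Spark query Hive tables directly
        .getOrCreate()
    )

    # Read from Hive, then write the result to PostgreSQL over JDBC.
    df = spark.sql("SELECT * FROM prospects_clean")

    (
        df.write
          .format("jdbc")
          .option("url", "jdbc:postgresql://localhost:5432/bankdb")
          .option("dbtable", "prospects_clean")
          .option("user", "spark_user")
          .option("password", "secret")  # in practice, load from configuration
          .option("driver", "org.postgresql.Driver")
          .mode("overwrite")
          .save()
    )
    ```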
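
    For logging and error handling, a minimal sketch of the general pattern (wrap the job body, log failures with a stack trace, exit nonzero); the paths and logger setup are hypothetical, not the course's framework.

    ```python
    import logging
    import sys

    from pyspark.sql import SparkSession

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("prospect_job")


    def main():
        spark = SparkSession.builder.appName("ProspectJob").getOrCreate()
        try:
            logger.info("Reading input data")
            df = spark.read.parquet("hdfs:///data/prospects_clean")
            logger.info("Read %d rows", df.count())
        except Exception:
            logger.exception("Job failed")  # logs the full stack trace
            sys.exit(1)
        finally:
            spark.stop()


    if __name__ == "__main__":
        main()
    ```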
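
    Finally, for the Glue and Athena topic, a minimal sketch of a Glue PySpark job script: it reads raw CSV from S3, transforms it with Spark, and writes Parquet back to S3, where Athena can query it. The bucket paths and column name are hypothetical placeholders.

    ```python
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job boilerplate: resolve the job name and initialize the job.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw CSV data from S3, filter it, and write it back as Parquet.
    df = glue_context.spark_session.read.csv(
        "s3://my-bucket/raw/prospects/", header=True
    )
    df.filter(df.age != "unknown").write.mode("overwrite").parquet(
        "s3://my-bucket/clean/prospects/"
    )

    job.commit()
    ```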


    Prerequisites:

    • Some basic programming skills

    • Some knowledge of SQL queries