Data Analyst Training
This four-day course teaches Data Analysts how to analyse massive amounts of data available in a Hadoop YARN cluster.
Training outcome
Participants will gain the ability to work effectively with the huge datasets stored in Hadoop clusters, as well as an understanding of which processing and analytics needs are best addressed by the individual Big Data frameworks. After the training, participants will be able to independently import data into a Hadoop cluster, store it in Hive tables, use Hive and Spark to transform and analyse the data, and use Kibana to visualise it.
Course agenda
Day 1
Introduction to Big Data and Apache Hadoop
Description of the StreamRock case study, along with the opportunities and challenges that come with Big Data technologies.
- Hands-on exercise: Accessing a remote multi-node Hadoop cluster.
Introduction to HDFS
- Hands-on exercise: Importing structured data into the cluster using HUE
Introduction to YARN
- Hands-on exercise: Familiarisation with the YARN Web UI
A short overview of MapReduce
- Hands-on exercise: Submitting an example ETL MapReduce job to a YARN cluster
Day 2
Providing data-driven answers to business questions using SQL-like solutions
Introduction to Apache Hive
- Hands-on exercise: Creating Hive databases and tables using HUE
- Hands-on exercise: Ad-hoc analysis of structured data with HiveQL
Advanced aspects of Hive, e.g. partitioning, bucketing, strict mode, and execution plans
- Hands-on exercise: Hive partitioning
Extending Hive with custom UDFs and SerDes
- Hands-on exercise: Using custom Java UDF and SerDe for JSON
Hadoop File Formats (Avro, Parquet, ORC)
- Hands-on exercise: Interacting with Parquet and Avro in Hive
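The partitioning and file-format topics above can be sketched in HiveQL. This is an illustrative example only; the table and column names are hypothetical, not taken from the course materials:

```sql
-- Illustrative sketch: a partitioned, Parquet-backed Hive table.
-- Partitioning by a date column lets Hive prune irrelevant
-- partition directories at query time.
CREATE TABLE songs (
  title  STRING,
  artist STRING,
  plays  BIGINT
)
PARTITIONED BY (play_date STRING)
STORED AS PARQUET;

-- A query that filters on the partition column reads only
-- the matching partitions instead of scanning the whole table.
SELECT artist, SUM(plays) AS total_plays
FROM songs
WHERE play_date = '2024-01-15'
GROUP BY artist;
```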
Day 3
Interactive analysis with Apache Spark
Introduction to Apache Spark
- Spark Core and its advantages over MapReduce
- Basics of working with Spark API
- Spark architecture and integration with YARN
- Hands-on exercise: Interacting with Spark Core API
Doing data analysis with SparkSQL
- Introduction to SparkSQL and DataFrames
- Integration with Hive and other tools
- Introduction to Spark notebooks
- Hands-on exercise: Implementing a SparkSQL application to clean a dataset of song records
- Hands-on exercise: Data analysis with SparkSQL and Jupyter
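The SparkSQL exercises above boil down to running SQL over DataFrames registered as temporary views (e.g. via `df.createOrReplaceTempView("songs")`) from a notebook. A minimal sketch, assuming a hypothetical `songs` view with `artist` and `duration` columns:

```sql
-- Illustrative sketch: the kind of query run with spark.sql(...)
-- against a temporary view in a Jupyter notebook.
SELECT artist,
       COUNT(*)      AS tracks,
       AVG(duration) AS avg_duration_s
FROM songs
WHERE duration IS NOT NULL   -- cleaning step: drop malformed records
GROUP BY artist
ORDER BY tracks DESC
LIMIT 10;
```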
Day 4
Visualisation and Search
Introduction to Elasticsearch and Kibana
- The most important features of Elasticsearch
- Hands-on exercise: Indexing data with Elasticsearch and building visualisations in Kibana
Advanced aspects of working with Spark notebooks
- Hands-on exercise: Visualisation and publishing data with Zeppelin or Jupyter
Contact person
Testimonials
Other Big Data Training
Machine Learning Operations Training (MLOps)
This four-day course will teach you how to operationalize Machine Learning models using popular open-source tools, like Kedro and Kubeflow, and deploy them using cloud computing.
Hadoop Administrator Training
This four-day course provides the practical and theoretical knowledge necessary to operate a Hadoop cluster. We put great emphasis on practical hands-on exercises that aim to prepare participants to work as effective Hadoop administrators.
Advanced Spark Training
This two-day training is dedicated to Big Data engineers and data scientists who are already familiar with the basic concepts of Apache Spark and have hands-on experience implementing and running Spark applications.
Real-Time Stream Processing
This two-day course teaches data engineers how to process unbounded streams of data in real time using popular open-source frameworks.
Analytics engineering with Snowflake and dbt
This two-day training is dedicated to data analysts, analytics engineers, and data engineers who are interested in learning how to build and deploy Snowflake data transformation workflows faster than ever before.
Mastering ML/MLOps and AI-powered Data Applications in the Snowflake Data Cloud
This two-day training is dedicated to data engineers, data scientists, and tech enthusiasts. This workshop provides hands-on experience and real-world insights into architecting data applications on the Snowflake Data Cloud.
Modern Data Pipelines with DBT
In this one-day workshop, you will learn how to create modern data transformation pipelines managed by dbt. Discover how you can improve your pipelines' quality and your data team's workflow by introducing a tool that standardises the way good practices are incorporated within the team.
Real-time analytics with Snowflake and dbt
This two-day training is dedicated to data analysts, analytics engineers, and data engineers who are interested in learning how to build and deploy real-time Snowflake data pipelines.
Contact us
Interested in our solutions?
Contact us!
Together, we will select the best Big Data solutions for your organization and build a project that delivers real impact.