Machine Learning Operations Training (MLOps)
This four-day course teaches you how to operationalize Machine Learning models using popular open-source tools such as Kedro and Kubeflow, and how to deploy them in the cloud.
During the course we will simulate real-world end-to-end scenarios: building a Machine Learning pipeline to train a model and deploying it on a Kubeflow environment. We’ll walk through practical use cases of MLOps for creating reproducible, scalable and modular data science code. Next, we’ll propose a solution for running pipelines in the cloud (GCP, AWS or Azure), leveraging managed and serverless services. All exercises will be performed using either a local Docker environment or a cloud account (GCP, AWS or Azure).
The scope of the course can be extended or customized upon request to cover specific machine learning topics or managed cloud solutions.
After the training, participants will gain:
- Practical knowledge of building Machine Learning pipelines using Kedro
- Hands-on experience building a Machine Learning platform with Kubeflow Pipelines
- Tips on real-world applications and best practices
Machine Learning and MLOps fundamentals
Introduction to Machine Learning Operations (MLOps)
Introduction and key concepts
Challenges of deploying and maintaining Machine Learning models in production
The Machine Learning model lifecycle
Structuring an ML project with Kedro
Kedro - a framework to structure your ML pipeline
Create reproducible, maintainable and modular data science code
Build your Machine Learning pipeline
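The Kedro topics above center on one idea: express each step of an ML workflow as a small, named function and wire those functions into a pipeline through named datasets. As a rough illustration of that structure in plain Python (this is not the Kedro API; all function and dataset names are made up for the sketch):

```python
# Illustrative sketch of the node-and-pipeline structure that Kedro
# formalizes: each step is a small pure function, and the pipeline
# declares which named datasets flow between them.
# Plain Python only -- not the Kedro API; all names are hypothetical.

def clean(raw_rows):
    """Drop rows containing missing values."""
    return [r for r in raw_rows if None not in r]

def train(rows):
    """'Train' a trivial model: the mean of the first column."""
    return sum(r[0] for r in rows) / len(rows)

# A "pipeline" here is an ordered list of (function, input, output) nodes.
PIPELINE = [
    (clean, "raw_data", "clean_data"),
    (train, "clean_data", "model"),
]

def run(pipeline, catalog):
    """Execute each node, reading inputs from and writing outputs to a
    shared data catalog (a plain dict in this sketch)."""
    for func, inp, out in pipeline:
        catalog[out] = func(catalog[inp])
    return catalog

if __name__ == "__main__":
    result = run(PIPELINE, {"raw_data": [(1.0, 2.0), (None, 3.0), (3.0, 4.0)]})
    print(result["model"])  # mean of 1.0 and 3.0 -> 2.0
```

Because every node only touches named datasets in the catalog, individual steps stay testable and reusable, which is the reproducibility and modularity benefit the course highlights.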
Developing and orchestrating ML pipelines with Kubeflow
Kubeflow and Kubeflow Pipelines
Introduction and key concepts
Examples of managed Kubeflow Pipelines and serverless pipeline deployments (Vertex AI, SageMaker)
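Orchestrators such as Kubeflow Pipelines model a workflow as a directed acyclic graph of steps and run each step only after all of its upstream dependencies have finished. A minimal sketch of that scheduling idea in plain Python (not the KFP SDK; the step names are hypothetical):

```python
# Illustrative sketch of how a pipeline orchestrator (e.g. Kubeflow
# Pipelines) schedules steps: each step declares its upstream
# dependencies, and the runner executes steps only once every
# dependency has completed. Plain Python, not the KFP SDK.

# step name -> set of steps it depends on
DAG = {
    "ingest": set(),
    "preprocess": {"ingest"},
    "train": {"preprocess"},
    "evaluate": {"preprocess", "train"},
}

def execution_order(dag):
    """Return one valid execution order (a topological sort)."""
    done, order = set(), []
    while len(order) < len(dag):
        # steps whose dependencies are all already completed
        ready = [s for s, deps in dag.items()
                 if s not in done and deps <= done]
        if not ready:
            raise ValueError("cycle detected in pipeline DAG")
        for step in sorted(ready):  # sorted only for determinism
            order.append(step)
            done.add(step)
    return order
```

In a real deployment each step would run as a container on Kubernetes (or as a managed job on Vertex AI or SageMaker), but the dependency-driven scheduling shown here is the same.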
Building MLOps infrastructure
Building infrastructure for your Machine Learning platform
Overview of MLOps Frameworks landscape, and reference architectures
Summary and wrap-up
Completed in half the estimated time and with a fivefold improvement on data collection goals, the robust product has exponentially increased processing capabilities. GetInData’s in-depth engagement, reliability, and broad industry knowledge enabled seamless project execution and implementation.
GetInData had been supporting us in building production Big Data infrastructure and implementing real-time applications that process large streams of data. In light of our successful cooperation with GetInData, their unique experience and the quality of work delivered, we recommend the company as a Big Data vendor.
GetInData delivered a robust mechanism that met our requirements. Their involvement allowed us to add a feature to our product, despite not having the required developer capacity in-house.
Their consistent communication and responsiveness enabled GetInData to drive the project forward. They possess comprehensive knowledge of the relevant technologies and have an intuitive understanding of business needs and requirements. Customers can expect a partner that is open to feedback.
We sincerely recommend GetInData as a Big Data training provider! The trainer is a very experienced practitioner and he gave us a lot of tips regarding production deployments, possible issues as well as good practices that are invaluable for a Hadoop administrator.
The engineers and administrators at GetInData are world-class experts. They have proven experience in many open-source technologies such as Hadoop, Spark, Kafka and Flink for implementing batch and real-time pipelines.
Other Big Data Training
Hadoop Administrator Training
This four-day course provides the practical and theoretical knowledge necessary to operate a Hadoop cluster. We put great emphasis on practical hands-on exercises that aim to prepare participants to work as effective Hadoop administrators.
Advanced Spark Training
This two-day training is dedicated to Big Data engineers and data scientists who are already familiar with the basic concepts of Apache Spark and have hands-on experience implementing and running Spark applications.
Data Analyst Training
This four-day course teaches Data Analysts how to analyse massive amounts of data available in a Hadoop YARN cluster.
Real-Time Stream Processing
This two-day course teaches data engineers how to process unbounded streams of data in real time using popular open-source frameworks.
Modern Data Pipelines with DBT
In this one-day workshop, you will learn how to create modern data transformation pipelines managed by DBT. Discover how you can improve the quality of your pipelines and the workflow of your data team by introducing a tool that standardizes good practices within the team.
Interested in our solutions?
Together, we will select the best Big Data solutions for your organization and build a project that delivers real impact.