Data Streaming and Real-Time Data Processing Training Course
Course Overview
This course offers a practical and structured entry point into constructing real-time data streaming systems. It explores essential concepts, architectural patterns, and industry-standard tools required to process continuous data at scale. Participants will acquire the skills to design, implement, and optimise streaming pipelines using contemporary frameworks. The curriculum advances from foundational theory to hands-on application, empowering learners to confidently develop production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs enriched with real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Grasp the concepts of real-time data streaming and system architecture
• Differentiate between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Work with distributed streaming tools and frameworks
• Apply event-time processing, windowing, and stateful operations
• Build and optimise real-time data solutions tailored to business use cases
This course is available as onsite live training in Malaysia or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Batch vs real-time processing fundamentals (contrasted in the sketch after this list)
• Event-driven architecture basics
• Common use cases in industry
• Overview of streaming ecosystem
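To make the batch vs real-time distinction concrete, here is a minimal, library-free Python sketch; the function names are illustrative, not drawn from any framework. A batch job computes its answer once over a complete, bounded dataset, while a streaming job maintains a running answer over a potentially unbounded source, one event at a time.

def batch_average(readings):
    # Batch model: the whole dataset is available before processing starts.
    return sum(readings) / len(readings)

def streaming_average(reading_source):
    # Streaming model: state (count, total) is carried forward between events,
    # and an up-to-date answer is emitted after every arrival.
    count, total = 0, 0.0
    for reading in reading_source:  # the source may never end
        count += 1
        total += reading
        yield total / count

print(batch_average([21.0, 22.5, 19.8]))           # one answer, computed once
for running_avg in streaming_average(iter([21.0, 22.5, 19.8])):
    print(running_avg)                              # a fresh answer per event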
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers (see the sketch after this list)
• Topics, partitions, and data flow
• Data ingestion strategies
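The sketch below illustrates the producer/consumer model using the third-party kafka-python client; the broker address, the "events" topic, and the consumer group name are placeholders chosen for the example, not fixed conventions.

import json
from kafka import KafkaProducer, KafkaConsumer

# Producer side: serialise each record and publish it to a topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# Records sharing a key land in the same partition, preserving per-key order.
producer.send("events", key=b"sensor-1", value={"temp": 21.5})
producer.flush()

# Consumer side: consumers in the same group divide the topic's partitions.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.partition, message.offset, message.value)

Keying records (here by a sensor ID) keeps all events for one key in order, while the consumer-group mechanism lets consumption scale out by adding consumers.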
Day 3
• Stream processing concepts and frameworks
• Event time vs processing time
• Windowing techniques and use cases (see the sketch after this list)
• Stateful stream processing
• Fault tolerance and checkpointing basics
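The following PySpark Structured Streaming sketch gives a taste of event-time windowing: readings are grouped into one-minute tumbling windows keyed by the event's own timestamp, and a watermark bounds how late data may arrive. The socket source and the "timestamp,value" line format are assumptions made for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

# Assumed source: a socket emitting lines such as "2024-01-01 10:00:00,21.5".
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

events = lines.select(
    F.split("value", ",").getItem(0).cast("timestamp").alias("event_time"),
    F.split("value", ",").getItem(1).cast("double").alias("reading"),
)

# Window by event time (when the reading happened), not processing time
# (when it arrived); the watermark admits events up to 2 minutes late.
windowed = (events
            .withWatermark("event_time", "2 minutes")
            .groupBy(F.window("event_time", "1 minute"))
            .agg(F.avg("reading").alias("avg_reading")))

query = windowed.writeStream.outputMode("update").format("console").start()
query.awaitTermination()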
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment (see the sketch after this list)
• Introduction to cloud-based streaming services
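A common enrichment pattern joins a live stream against a slowly changing reference table. The PySpark sketch below uses a stream-static join for this; the file paths, schema, and column names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("enrichment-demo").getOrCreate()

# Static reference data (e.g. device metadata), read once at startup.
devices = spark.read.json("/data/reference/devices.json")  # device_id, location, ...

# Streaming facts: JSON files dropped into a watched directory.
readings = (spark.readStream
            .schema("device_id STRING, temp DOUBLE, ts TIMESTAMP")
            .json("/data/incoming/readings/"))

# Stream-static join: each incoming reading is enriched with its device's metadata.
enriched = readings.join(devices, on="device_id", how="left")

(enriched.writeStream
 .outputMode("append")
 .format("console")
 .start()
 .awaitTermination())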
Day 5
• Monitoring and observability in streaming systems (see the sketch after this list)
• Security and access control basics
• Performance tuning and optimisation
• End-to-end pipeline design review
• Real-world use cases such as fraud detection and IoT processing
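As a small taste of observability, the sketch below polls a running Structured Streaming query for the progress metrics Spark already collects per micro-batch; the monitor() helper and the ten-second interval are illustrative choices, not framework requirements.

import time

def monitor(query, interval_seconds=10):
    # 'query' is a started PySpark StreamingQuery, e.g. from writeStream.start().
    while query.isActive:
        progress = query.lastProgress  # dict for the latest micro-batch; None before the first
        if progress:
            print(
                "input rows/sec:", progress.get("inputRowsPerSecond"),
                "| processed rows/sec:", progress.get("processedRowsPerSecond"),
                "| trigger duration ms:", progress.get("durationMs", {}).get("triggerExecution"),
            )
        time.sleep(interval_seconds)

A sustained gap where processed rows per second trails input rows per second is a classic early signal that the pipeline needs tuning.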
Open Training Courses require 5+ participants.
Testimonials (1)
Hands-on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Target Audience:
This course is designed for IT professionals seeking robust solutions for storing and processing extensive datasets within distributed system environments.
Course Objective:
To impart in-depth knowledge regarding the administration of Hadoop clusters.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led live training in Malaysia (online or onsite) is aimed at intermediate-level data scientists and engineers who wish to use Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark.
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics refers to the process of examining large volumes of diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The healthcare sector generates vast amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to this information holds significant potential for deriving insights that can enhance the delivery of healthcare. However, the sheer scale of these datasets presents considerable challenges in terms of analysis and practical application within clinical environments.
In this instructor-led, live remote training, participants will learn how to perform big data analytics in health through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to manage medical data
- Study big data systems and algorithms in the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- A mix of lectures, discussions, exercises, and extensive hands-on practice.
Note
- To request customized training for this course, please contact us to arrange it.
Hadoop for Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. Over the course of three days (or four, if an optional track is selected), participants will gain insights into the business advantages and use cases associated with Hadoop and its ecosystem. The curriculum covers planning for cluster deployment and expansion, along with practical skills in installing, maintaining, monitoring, troubleshooting, and optimising Hadoop. Attendees will also engage in bulk data loading exercises, explore various Hadoop distributions, and practise managing Hadoop ecosystem tools. The course concludes with a discussion on securing the cluster using Kerberos.
“The materials were meticulously prepared and comprehensively covered. The lab sessions were exceptionally helpful and well-organised.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
The course combines lectures with hands-on labs, maintaining an approximate split of 60% lectures and 40% practical lab work.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop is the leading framework for processing Big Data across server clusters. This course introduces developers to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands as one of the most widely adopted frameworks for processing Big Data across server clusters. This course provides an in-depth exploration of data management within HDFS, alongside advanced techniques for Pig, Hive, and HBase. These sophisticated programming methods are designed to add significant value for experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Audience:
This course aims to demystify big data and Hadoop technology, demonstrating that it is accessible and straightforward to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led live training in Malaysia (online or onsite) is designed for system administrators seeking to learn how to set up, deploy, and manage Hadoop clusters within their organisations.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use the Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as the storage engine for on-premise Spark deployments.
- Configure Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems like Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring, and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
The curriculum guides developers through HBase architecture, data modeling, and application development. Topics also cover utilizing MapReduce with HBase and address key administrative areas, with a focus on performance optimization. The course is highly practical, featuring numerous lab exercises.
Duration : 3 days
Audience : Developers & Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source, flow-based data integration and event-processing platform. It enables automated, real-time data routing, transformation, and system mediation between disparate systems, with a web-based UI and fine-grained control.
This instructor-led, live training (onsite or remote) is aimed at intermediate-level administrators and engineers who wish to deploy, manage, secure, and optimize NiFi dataflows in production environments.
By the end of this training, participants will be able to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows from varied sources and sinks.
- Implement flow automation, routing, and transformation logic.
- Optimize performance, monitor operations, and troubleshoot issues.
Format of the Course
- Interactive lecture with real-world architecture discussion.
- Hands-on labs: building, deploying, and managing flows.
- Scenario-based exercises in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
Apache NiFi for Developers
7 Hours
In this instructor-led live training in Malaysia, participants will learn the fundamentals of flow-based programming while developing various demo extensions, components, and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to developing scalable data processing and Machine Learning workflows using PySpark. Participants will discover how Apache Spark functions within contemporary Big Data ecosystems and learn to process vast datasets efficiently by applying distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led live training in Malaysia, participants will learn how to use Python and Spark together to analyze big data while engaging in hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real-world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in Malaysia (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyse, and transform large and complex datasets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a unified, data-centric platform that consolidates big data management, artificial intelligence, and governance. Its Rocket and Intelligence modules facilitate rapid data exploration, transformation, and advanced analytics tailored for enterprise settings.
This instructor-led live training, available online or on-site, is designed for intermediate data professionals looking to master the Rocket and Intelligence modules within Stratio using PySpark. Key areas of focus include looping structures, user-defined functions (UDFs), and implementing advanced data logic.
Upon completing this training, participants will be capable of:
- Effectively navigating and utilizing the Stratio platform through its Rocket and Intelligence modules.
- Applying PySpark for data ingestion, transformation, and analytical tasks.
- Employing loops and conditional logic to orchestrate data workflows and execute feature engineering.
- Developing and managing user-defined functions (UDFs) to enable reusable data operations within PySpark.
Course Format
- Engaging lectures accompanied by interactive discussions.
- Extensive exercises and practical sessions.
- Hands-on implementation within a live laboratory environment.
Customization Options
- To arrange customized training for this course, please get in touch with us to discuss your specific requirements.