Streaming data to HDFS using Apache Flume | Big Data Hadoop Tutorial

This lecture is all about streaming data to HDFS using Apache Flume. We set up a Flume agent in the HDP Sandbox that listens to a local directory using SpoolDir as the source and streams the files to an HDFS directory in real time.
Commands for this lecture:
cd /usr/hdp/current/flume-server
mkdir spool
On Ambari, create a directory under /user/maria_dev/flume
Go to Ambari and verify
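For reference, the agent for this setup could look like the minimal sketch below. The agent name, component names, and paths are illustrative assumptions based on the directories used in this lecture; the actual config file shown in the video may differ.

# spool.conf - a minimal sketch of a SpoolDir-to-HDFS agent (names are illustrative)
a1.sources = src1
a1.channels = ch1
a1.sinks = snk1

# SpoolDir source: watches the local spool directory for new files
a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /usr/hdp/current/flume-server/spool
a1.sources.src1.channels = ch1

# In-memory channel buffering events between source and sink
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 10000
a1.channels.ch1.transactionCapacity = 100

# HDFS sink: writes incoming events under /user/maria_dev/flume as plain text
a1.sinks.snk1.type = hdfs
a1.sinks.snk1.hdfs.path = /user/maria_dev/flume
a1.sinks.snk1.hdfs.fileType = DataStream
a1.sinks.snk1.channel = ch1

You would then start the agent with something like:
bin/flume-ng agent --conf conf --conf-file spool.conf --name a1 -Dflume.root.logger=INFO,console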
In the previous lecture we saw how to publish log data using Apache Flume: we streamed some raw data using Telnet as the source and published it to a logger sink in real time using the Apache Flume architecture. Before starting this tutorial, you should know the basic concepts of Apache Flume, such as what a source, channel, and sink are and how they work under the hood.
Commands for this lecture:
cd /usr/hdp/current/flume-server
yum -y install telnet
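For context, the Telnet-based pipeline from that lecture typically uses Flume's netcat source (which Telnet connects to) together with a logger sink. A minimal sketch, with illustrative names and port:

# netcat.conf - a minimal sketch of a netcat-to-logger agent (names and port are illustrative)
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Netcat source: listens on a TCP port; connect with: telnet localhost 44444
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# In-memory channel
a1.channels.c1.type = memory

# Logger sink: prints each received event to the Flume log/console
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1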
----------------------------------------------------------------------------------------------------------------------
HDP Sandbox Installation links:
-------------------------------------------------------------------------------------------------------------
Also check out similar informative videos in the field of cloud computing:
Audience
This tutorial is made for professionals who want to learn the basics of Big Data Analytics using the Hadoop ecosystem and become Hadoop developers. Software professionals, analytics professionals, and ETL developers are the key beneficiaries of this course.
Prerequisites
Before proceeding with this course, I assume you have some basic knowledge of Core Java, database concepts, and any flavor of the Linux operating system.
---------------------------------------------------------------------------------------------------------------------------
Check out our full course topic wise playlist on some of the most popular technologies:
SQL Full Course Playlist-
PYTHON Full Course Playlist-
Data Warehouse Playlist-
Unix Shell Scripting Full Course Playlist-
-----------------------------------------------------------------------------------------------------------------------
Don't forget to like and follow us on our social media accounts:
Facebook-
Instagram-
Twitter-
Tumblr-
-------------------------------------------------------------------------------------------------------------------------
Channel Description-
AmpCode provides you with an e-learning platform whose mission is making education accessible to every student. AmpCode brings you tutorials and full courses on some of the best technologies in the world today. By subscribing to this channel, you will never miss out on high-quality videos on trending topics in the areas of Big Data & Hadoop, DevOps, Machine Learning, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, AWS, Digital Marketing, and many more.
#bigdata #datascience #dataanalytics #datascientist #hadoop #hdfs #hdp #mongodb #cassandra #hbase #nosqldatabase #nosql #pyspark #spark #presto #hadooptutorial #hadooptraining