Spark [Hash Partition] Explained

Spark [Hash Partition] Explained (Tamil)
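As background for the video's topic: hash partitioning assigns each record to a partition by hashing its key and taking the result modulo the number of partitions, so every record with the same key lands in the same partition. A minimal sketch of that idea in plain Python (Spark's `HashPartitioner` uses the JVM `hashCode`, so the exact partition numbers will differ, but the mechanism is the same):

```python
def hash_partition(key, num_partitions):
    """Assign a key to a partition: hash(key) mod num_partitions.

    Python's built-in hash() stands in for the JVM hashCode that
    Spark's HashPartitioner actually uses. In Python, % with a
    positive modulus always yields a value in [0, num_partitions).
    """
    return hash(key) % num_partitions

# All occurrences of the same key map to the same partition,
# which is what makes key-based operations like reduceByKey work.
assert hash_partition("spark", 4) == hash_partition("spark", 4)
```

Note that this determinism is per process: Python salts string hashes between runs (`PYTHONHASHSEED`), whereas Spark's JVM hash is stable across the cluster.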



Video Playlist
-----------------------

YouTube channel link

Website
Technology in Tamil & English

#bigdata #hadoop #spark #apachehadoop #whatisbigdata #bigdataintroduction #bigdataonline #bigdataintamil #bigdatatamil #hadoopframework #hive #hbase #sqoop #mapreduce #hdfs #hadoopecosystem #apachespark #sparkmemoryconfig #executormemory #drivermemory #sparkcores #sparkexecutors #sparkmemory #sparkdeploy #sparksubmit #sparkyarn #sparklense #sparkprofiling #sparkqubole #sparkhashpartition #hashpartition
Comments
Author

Thank you, sir, for the brief explanation.

yaswanthgenji
Author

How is the key decided by Spark or MapReduce? How does it know which key to pick?

Suppose I am reading a 1 GB file; how is the key decided then?
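On this question: Spark does not pick a key out of a raw file by itself. When you read a plain text file you just get lines; key/value pairs exist only after the program creates them (for example with a `map` that emits `(word, 1)` pairs), and the hash partitioner then hashes whatever key it finds in those pairs. A plain-Python sketch of that flow (no Spark; `partition_of` is a stand-in for Spark's `HashPartitioner`):

```python
# Lines as they would come from reading a text file: no keys yet.
lines = ["spark spark hadoop", "hive spark"]

# The program creates the (key, value) pairs itself, as in a word count.
pairs = [(word, 1) for line in lines for word in line.split()]

def partition_of(key, num_partitions=2):
    # Stand-in for Spark's HashPartitioner, which hashes the key
    # of each pair (via the JVM hashCode) modulo num_partitions.
    return hash(key) % num_partitions

# Group pairs by their assigned partition: every pair with the
# same key necessarily lands in the same partition.
by_partition = {}
for key, value in pairs:
    by_partition.setdefault(partition_of(key), []).append((key, value))
```

(In classic Hadoop MapReduce, by contrast, `TextInputFormat` does supply a default key when reading a file: the byte offset of each line, with the line text as the value.)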

hemapriyaprakash
Author

The concept of Hive bucketing is based on the hashing technique. Is it the same as hash partitioning?
Please explain.

rajkiran
Author

How do we determine that the number of input tasks and input blocks is 2?

prempalani
Author

Bro, the Tamil and English video titles have been changed.

needhidevansenthilkumar