Spark Java Tutorial | Spark Java Course | Spark Java Training | Intellipaat



#sparkjavatutorial #sparkjavacourse #sparkjavatraining #apachesparkforjavadevelopers #intellipaat

If you’ve enjoyed this Apache Spark Java video, like it and subscribe to our channel for more informative Spark tutorials.
Got any questions about Apache Spark? Ask us in the comments section below.
----------------------------
Intellipaat Edge
1. 24/7 Lifetime Access & Support
2. Flexible Class Schedule
3. Job Assistance
4. Mentors with 14+ Years of Experience
5. Industry-Oriented Courseware
6. Lifetime Free Course Upgrades
------------------------------

Comments

True. Spark can be considered a tool in the realm of AI/ML/DS, especially for tasks involving distributed data processing, data transformation, and large-scale analytics. It's commonly used in conjunction with other AI/ML/DS frameworks and libraries to build end-to-end data pipelines and perform computations on large datasets efficiently.

ChatGPT 🎉

Ramkumar-ujfo
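
To make the comment above concrete, here is a minimal sketch of the kind of distributed transformation Spark runs inside such pipelines, using the Spark Java API. The file name events.csv and the column name category are hypothetical stand-ins, not from the video.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class PipelineSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("PipelineSketch")
                    .master("local[*]")   // local mode; use a cluster URL in production
                    .getOrCreate();

            // Read a (hypothetical) events file and run a distributed aggregation
            Dataset<Row> events = spark.read()
                    .option("header", "true")
                    .csv("events.csv");

            events.groupBy("category").count().show();

            spark.stop();
        }
    }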

Where do the Spark master and worker nodes come into the picture when running this app?

minnuhoneyvolgs
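
For context on this question: when a tutorial app sets its master to "local[*]", everything runs inside one JVM and no separate master or worker processes are involved; the master/worker split only enters when the SparkConf points at a cluster. A minimal sketch, where spark://master-host:7077 is a placeholder URL:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class MasterExample {
        public static void main(String[] args) {
            // "local[*]" runs driver and executors in this JVM (no master/worker)
            SparkConf conf = new SparkConf()
                    .setAppName("MasterExample")
                    .setMaster("local[*]");

            // Against a standalone cluster you would instead point at the master,
            // e.g. .setMaster("spark://master-host:7077"), and the worker nodes
            // registered with that master would execute the tasks.

            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                System.out.println(sc.parallelize(Arrays.asList(1, 2, 3)).count());
            }
        }
    }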

Do we need to add dependencies in pom.xml?
If so, please share them.

pramodabr
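
For a Maven project, the Spark dependencies do go in pom.xml. A typical snippet is sketched below; the Scala suffix (_2.12) and the version (3.3.0 here) are assumptions and must match the Spark build you installed.

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.3.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>3.3.0</version>
        </dependency>
    </dependencies>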

Whenever I try to edit Path under system variables, the "Edit System Variable" box comes up instead of the "Edit Environment Variable" list box, so I can't add Spark's path to Path. Please help.

samrat
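
If the GUI dialog is not cooperating, one workaround is to append the Spark bin folder to the user PATH from a Command Prompt instead. The install path C:\spark\bin below is a placeholder, and note that setx truncates values longer than 1024 characters:

    setx PATH "%PATH%;C:\spark\bin"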

Can you please zoom in when you type the commands and write the Java code? I'm unable to see what you type.

uzmafarheen

Hello there. I am using JDK 17 on Ubuntu and have set up the Apache Spark master and worker using Docker, and everything is working okay. The problem comes when I try to integrate a Spring Boot application with the Spark instance running locally, and this is the error I get: Caused by: Unable to make private java.nio.DirectByteBuffer(long, int) accessible: module java.base does not "opens java.nio" to unnamed module @3745e5c6. Kindly help me resolve this.

kelvinkirwa
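
That error is the JDK 17 module system blocking Spark's reflective access to java.nio; Spark's own launcher scripts pass --add-opens flags for this, but a Spring Boot app that embeds Spark has to supply them itself. One way to do that, sketched here with the Spring Boot Maven plugin (your project may pass JVM options differently):

    mvn spring-boot:run -Dspring-boot.run.jvmArguments="--add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED"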

Very good and informative session.
Can we read a Parquet file using Java? If yes, could you please point me to some references?

shreep
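
Yes, Parquet can be read directly with the Spark Java API. A minimal sketch is below; the path data/users.parquet is a placeholder:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ReadParquet {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("ReadParquet")
                    .master("local[*]")
                    .getOrCreate();

            // Parquet files are self-describing, so no schema needs to be supplied
            Dataset<Row> df = spark.read().parquet("data/users.parquet");
            df.printSchema();
            df.show(5);

            spark.stop();
        }
    }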

In my Spark 2.4.3 job, after all my transformations, computations, and joins, I write my final dataframe to S3 in Parquet format. But irrespective of my core count, the job takes a fixed amount of time to complete the save action.

For distinct core counts (8, 16, 24) the write action consistently takes 8 minutes. Because of this, my solution does not scale. How can I make it scale so that overall job execution time is proportional to the cores used?

raksadi
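
A common cause of a fixed save time is that the number of output tasks is pinned by the dataframe's partition count (often set by spark.sql.shuffle.partitions after joins) rather than by the core count, so extra cores sit idle during the write. A hedged sketch of matching write parallelism to the available cores; finalDf and s3a://my-bucket/output/ are stand-ins for the real job's dataframe and destination:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class ScalableWrite {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("ScalableWrite")
                    .master("local[*]")
                    .getOrCreate();

            // Stand-in for the real joined/transformed dataframe
            Dataset<Row> finalDf = spark.range(1_000_000).toDF();

            // Repartition so the save runs one task per available core slot,
            // instead of a partition count left over from earlier shuffles
            int parallelism = spark.sparkContext().defaultParallelism();
            finalDf.repartition(parallelism)
                   .write()
                   .mode(SaveMode.Overwrite)
                   .parquet("s3a://my-bucket/output/");   // hypothetical bucket

            spark.stop();
        }
    }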

Are you going to write a program, or just talk, talk, talk?

chessmaster

Listen to me, I am going to show you something. What? Listen to me, I am going to show you something. What?

chessmaster