Handling corrupted records in a JSON | Spark SQL with Scala | Databricks

Hi Friends, in this video we will see how to deal with a corrupted JSON file in Spark SQL with Scala.
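As a minimal sketch of the technique the video covers, the snippet below reads a JSON file in Spark's default PERMISSIVE mode and surfaces malformed lines through the `_corrupt_record` column. The file path and schema are assumptions for illustration; substitute your own.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._

object CorruptJsonDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CorruptJsonDemo")
      .master("local[*]")
      .getOrCreate()

    // The schema must include the corrupt-record column so PERMISSIVE mode
    // has somewhere to put the raw text of malformed lines.
    val schema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType),
      StructField("_corrupt_record", StringType)
    ))

    // "people.json" is a placeholder path.
    val df = spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE") // the default: keep bad rows
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("people.json")

    // Cache before filtering: since Spark 2.3, a query against a raw JSON read
    // that references only the corrupt-record column is disallowed.
    df.cache()

    // Rows that failed to parse have null data columns and the raw line
    // preserved in _corrupt_record.
    df.filter(col("_corrupt_record").isNotNull).show(false)

    spark.stop()
  }
}
```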
Comments

Thanks Sravana for the crystal-clear explanation. 👍

Jay-mni

Hi Sravana, thanks for sharing these very informative videos.

ManishSharma-wypy

Thanks for uploading valuable content. Please continue to upload more videos.

vignesan

Hi Sravana. I have used the badRecordsPath option, but it is not working for me: I am still getting the corrupted record in the DataFrame output. Can you please help me with this?

heenagirdher
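One likely explanation for the question above: `badRecordsPath` is a Databricks-only feature, and open-source Spark silently ignores unknown reader options, so corrupt rows simply stay in the output. The sketch below shows the option as it would be used on Databricks, plus an open-source alternative; both file paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Databricks only: bad records are diverted to files under this path.
// On open-source Spark this option is silently ignored.
val df = spark.read
  .option("badRecordsPath", "/tmp/badRecords")
  .json("/data/people.json")

// Open-source alternative: drop malformed rows at parse time instead.
val clean = spark.read
  .option("mode", "DROPMALFORMED")
  .json("/data/people.json")
```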

Can you explain how to handle corrupt records in a multiline JSON file?

kashishyadav
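For the multiline-JSON question above, a brief sketch: enable the `multiLine` reader option, and the parse-mode handling still applies. The path and corrupt-column name are placeholders.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// For JSON documents that span multiple lines (one object or array per file),
// enable multiLine; mode-based corrupt-record handling works the same way.
val df = spark.read
  .option("multiLine", "true")
  .option("mode", "PERMISSIVE")
  .option("columnNameOfCorruptRecord", "_corrupt_record")
  .json("people_multiline.json") // placeholder path
```

Note that in multiline mode the unit of parsing is the whole file, so when parsing fails the entire file's content typically lands in the corrupt-record column rather than a single line.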

Could you please help me understand the salting technique?

krishnag
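Since salting came up in the comment above, here is a minimal sketch of the idea for a skewed join: scatter the hot key on the big side across N salt buckets and replicate the small side across the same buckets. The DataFrames, column names, and bucket count are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical skewed join: bigDf has a hot "key"; smallDf is the lookup side.
val bigDf   = Seq(("hot", 1), ("hot", 2), ("cold", 3)).toDF("key", "value")
val smallDf = Seq(("hot", "a"), ("cold", "b")).toDF("key", "label")

val n = 8 // number of salt buckets; tune to the observed skew

// Big side: scatter each row into a random salt bucket.
val bigSalted = bigDf.withColumn("salt", (rand() * n).cast("int"))

// Small side: replicate every row once per salt value so the join still matches.
val smallSalted = smallDf.withColumn("salt", explode(array((0 until n).map(lit): _*)))

// Join on (key, salt): the hot key's rows are now spread across n tasks.
val joined = bigSalted.join(smallSalted, Seq("key", "salt")).drop("salt")
```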

Hi Sravana,

In a few of the interviews there were questions about Hadoop environment details, for example calling a common deployment script for all environments, and other such details. Basically, queries related to the complete Hadoop ecosystem in real time. Would it be possible for you to make a video on this?

suprakashkhuntia

Hi Sravana,
Are these modes handled by Scala with Spark?
If yes, can you please share the code? Thanks.

arjun

Are these modes (FAILFAST & PERMISSIVE) available only on Databricks, or does PySpark on-prem also have them?

justchill
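To the question above: the parse modes (PERMISSIVE, DROPMALFORMED, FAILFAST) belong to open-source Spark's JSON data source, not to Databricks, so the same `mode` option works in on-prem PySpark with identical syntax. A sketch in Scala, with a placeholder path:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// PERMISSIVE (default): keep bad rows, nulling fields / capturing raw text.
// DROPMALFORMED: silently drop bad rows.
// FAILFAST: throw an exception on the first bad record.
val strict = spark.read
  .option("mode", "FAILFAST")
  .json("people.json") // placeholder; the exception surfaces when an action runs
```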

Can you also do a video on Spark optimization techniques? (I can find the theory on the internet easily, but I need only the techniques that are in regular use in actual industry.)

pranavdesai

One suggestion: can you also share the sample files used in the video?

pranavdesai

Hi bro, can you please provide your Insta ID so that I can clarify my doubts about the interview questions asked on Spark?

sabesanj