Big Data Engineer Mock Interview | AWS | Kafka Streaming | SQL | PySpark Optimization #interview


I have trained more than 20,000 professionals in the field of Data Engineering over the last 5 years.

๐–๐š๐ง๐ญ ๐ญ๐จ ๐Œ๐š๐ฌ๐ญ๐ž๐ซ ๐’๐๐‹? ๐‹๐ž๐š๐ซ๐ง ๐’๐๐‹ ๐ญ๐ก๐ž ๐ซ๐ข๐ ๐ก๐ญ ๐ฐ๐š๐ฒ ๐ญ๐ก๐ซ๐จ๐ฎ๐ ๐ก ๐ญ๐ก๐ž ๐ฆ๐จ๐ฌ๐ญ ๐ฌ๐จ๐ฎ๐ ๐ก๐ญ ๐š๐Ÿ๐ญ๐ž๐ซ ๐œ๐จ๐ฎ๐ซ๐ฌ๐ž - ๐’๐๐‹ ๐‚๐ก๐š๐ฆ๐ฉ๐ข๐จ๐ง๐ฌ ๐๐ซ๐จ๐ ๐ซ๐š๐ฆ!

"๐€ 8 ๐ฐ๐ž๐ž๐ค ๐๐ซ๐จ๐ ๐ซ๐š๐ฆ ๐๐ž๐ฌ๐ข๐ ๐ง๐ž๐ ๐ญ๐จ ๐ก๐ž๐ฅ๐ฉ ๐ฒ๐จ๐ฎ ๐œ๐ซ๐š๐œ๐ค ๐ญ๐ก๐ž ๐ข๐ง๐ญ๐ž๐ซ๐ฏ๐ข๐ž๐ฐ๐ฌ ๐จ๐Ÿ ๐ญ๐จ๐ฉ ๐ฉ๐ซ๐จ๐๐ฎ๐œ๐ญ ๐›๐š๐ฌ๐ž๐ ๐œ๐จ๐ฆ๐ฉ๐š๐ง๐ข๐ž๐ฌ ๐›๐ฒ ๐๐ž๐ฏ๐ž๐ฅ๐จ๐ฉ๐ข๐ง๐  ๐š ๐ญ๐ก๐จ๐ฎ๐ ๐ก๐ญ ๐ฉ๐ซ๐จ๐œ๐ž๐ฌ๐ฌ ๐š๐ง๐ ๐š๐ง ๐š๐ฉ๐ฉ๐ซ๐จ๐š๐œ๐ก ๐ญ๐จ ๐ฌ๐จ๐ฅ๐ฏ๐ž ๐š๐ง ๐ฎ๐ง๐ฌ๐ž๐ž๐ง ๐๐ซ๐จ๐›๐ฅ๐ž๐ฆ."

๐‡๐ž๐ซ๐ž ๐ข๐ฌ ๐ก๐จ๐ฐ ๐ฒ๐จ๐ฎ ๐œ๐š๐ง ๐ซ๐ž๐ ๐ข๐ฌ๐ญ๐ž๐ซ ๐Ÿ๐จ๐ซ ๐ญ๐ก๐ž ๐๐ซ๐จ๐ ๐ซ๐š๐ฆ -

30 INTERVIEWS IN 30 DAYS- BIG DATA INTERVIEW SERIES

This mock interview series is launched as a community initiative under Data Engineers Club, aimed at aiding the community's growth and development.

Links to the free SQL & Python series I developed are given below -

Don't miss out - Subscribe to the channel for more such informative interviews and unlock the secrets to success in this thriving field!

Social Media Links:

Timestamp of Questions Discussed
00:00 Introduction
03:37 Notable use-case experience
06:03 Handling duplicates in relational databases
07:28 Scenario-based question
12:40 Coding questions
21:58 Difference between UNION and UNION ALL
22:36 Coding question
28:37 Understanding Spark
29:03 Experience with Spark optimizations
32:17 Difference between partitioning and bucketing
33:40 Using Amazon S3, Glue, and Lambda in data projects
35:42 Tools for job orchestration
37:05 Scenario-based question on shell scripting
39:54 SCD type 2 explanation
40:51 Scenario-based question

Tags
#mockinterview #bigdata #career #dataengineering #data #datascience #dataanalysis #productbasedcompanies #interviewquestions #apachespark #google #interview #faang #companies #amazon #walmart #flipkart #microsoft #azure #databricks #jobs
Comments
ะะฒั‚ะพั€

Very insightful on SQL, AWS, and data modeling concepts & applications of those concepts; it helps to recall and better understand the concepts learnt in the Big Data Master course & SQL LeetCode playlist :)

arunsundar
ะะฒั‚ะพั€

In the case of creating a primary key when one is unavailable, we can select any attribute and check whether it has a 1-to-1 relationship with the other composite values (in Excel, using a pivot table to check distinct values), and then use SHA2 or MD5 in ADF to form the surrogate key. Correct me if I'm wrong.
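The hashing step the comment describes can be sketched in plain Python (the commenter mentions SHA2/MD5 in ADF; the column names and separator below are illustrative assumptions, not from the video):

```python
import hashlib

def surrogate_key(*business_keys, sep="||"):
    """Derive a deterministic surrogate key by hashing the concatenated
    business-key columns. MD5 is shown here; SHA2 works the same way."""
    raw = sep.join(str(k) for k in business_keys)
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# Rows with identical business keys always map to the same surrogate key,
# which is what makes the hash usable as a stand-in primary key.
k1 = surrogate_key("CUST123", "2024-01-15")
k2 = surrogate_key("CUST123", "2024-01-15")
assert k1 == k2 and len(k1) == 32
```

The separator matters: without one, ("ab", "c") and ("a", "bc") would collide.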

ankandatta
ะะฒั‚ะพั€

@sumitmittal07 The SQL aggregate question where we need to calculate cumulative profit won't use ROWS BETWEEN, as that is used for a rolling profit over a range; instead it should simply be: CUMULATIVE_PROFIT = SUM(profit) OVER (ORDER BY transaction_id, transaction_date). Let me know whether I understood the question correctly.
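The point above can be checked with SQLite's window-function support (a minimal sketch; the transactions table and its values are hypothetical, not from the video): with an ORDER BY in the window, the default frame already gives a running total, so no explicit ROWS BETWEEN clause is needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (transaction_id INT, transaction_date TEXT, profit REAL)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, "2024-01-01", 100.0), (2, "2024-01-02", -30.0), (3, "2024-01-03", 50.0)],
)

# SUM(...) OVER (ORDER BY ...) with no frame clause defaults to
# "everything up to the current row", i.e. a cumulative sum.
rows = conn.execute("""
    SELECT transaction_id,
           SUM(profit) OVER (ORDER BY transaction_id, transaction_date)
               AS cumulative_profit
    FROM transactions
""").fetchall()
print(rows)  # [(1, 100.0), (2, 70.0), (3, 120.0)]
```

ROWS BETWEEN would only be needed for a bounded rolling window, e.g. the last N transactions.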

Also, in the partitioning and bucketing question, the interviewee explained the two the wrong way around.
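Setting Spark's actual file layout aside, a toy Python sketch of the distinction the comment is pointing at (the sample data is made up): partitioning creates one group per distinct column value, while bucketing hashes values into a fixed number of buckets.

```python
from collections import defaultdict

rows = [{"country": c, "id": i}
        for i, c in enumerate(["IN", "US", "IN", "UK", "US", "IN"])]

# Partitioning: one group per distinct value, so the number of
# partitions grows with the column's cardinality.
partitions = defaultdict(list)
for r in rows:
    partitions[r["country"]].append(r)

# Bucketing: hash of the value modulo a FIXED bucket count, so the
# number of buckets stays constant regardless of cardinality.
NUM_BUCKETS = 2
buckets = defaultdict(list)
for r in rows:
    buckets[hash(r["country"]) % NUM_BUCKETS].append(r)

print(len(partitions))  # one partition per distinct country -> 3
print(len(buckets) <= NUM_BUCKETS)  # never more than NUM_BUCKETS -> True
```

In Spark these correspond to `partitionBy` (directories per value) and `bucketBy` (a fixed number of files per partition), which is why bucketing suits high-cardinality join keys.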

sonuparmar
ะะฒั‚ะพั€

The interviewer looks like Tarun Gill :) BTW, nice interview.

akashprabhakar