Building our first PySpark Application using Jupyter Notebook! | PySpark Tutorial

In this lecture, we're going to build our first PySpark application using Jupyter Notebook, where we will create and run a simple Apache Spark script written in Python. Below are the data file and the GitHub link to our PySpark code.
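
For reference, here is a minimal sketch of the kind of script built in this lecture. The file name operations_management.csv and the columns industry and value come from the dataset used in the video; the app name and the exact chain of calls are illustrative, not a transcript of the notebook.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, desc

# Start (or reuse) a local Spark session
spark = SparkSession.builder.\
    appName("FirstPySparkApp").\
    master("local[*]").\
    getOrCreate()

# Read the CSV with a header row, letting Spark infer the column types
data = spark.read.format('csv').\
    option('inferSchema', 'true').\
    option('header', 'true').\
    option('path', 'operations_management.csv').\
    load()

# Keep two columns, filter on value, and sort in descending order
data_2 = data.select("industry", "value").\
    filter(col("value") > 1000).\
    orderBy(desc("value"))

data_2.show()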

Anaconda Distribution installation link:

----------------------------------------------------------------------------------------------------------------------

Apache Spark Installation links:

Environment Variables:

HADOOP_HOME = C:\hadoop
JAVA_HOME = C:\java\jdk
SPARK_HOME = C:\spark\spark-3.3.1-bin-hadoop2
PYTHONPATH = %SPARK_HOME%\python;%SPARK_HOME%\python\lib\py4j-0.10.9-src.zip;%PYTHONPATH%

Required PATH entries:

%SPARK_HOME%\bin
%HADOOP_HOME%\bin
%JAVA_HOME%\bin
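
Once the variables above are set, a quick way to sanity-check the setup from a fresh Jupyter notebook is sketched below. The findspark package is an optional helper (installed separately with pip install findspark) that puts Spark's Python libraries on sys.path, standing in for the manual PYTHONPATH entry.

import os

# Confirm the environment variables are visible to Python
for var in ("HADOOP_HOME", "JAVA_HOME", "SPARK_HOME"):
    print(var, "=", os.environ.get(var))

# Optional: locate Spark's Python libraries via SPARK_HOME
import findspark
findspark.init()

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
print(spark.version)   # should print 3.3.1 for the install above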

Also check out our full Apache Hadoop course:

----------------------------------------------------------------------------------------------------------------------

Also check out similar informative videos in the field of cloud computing:

Audience

This tutorial has been prepared for professionals and students aspiring to gain in-depth knowledge of Big Data analytics using Apache Spark and to move into Spark Developer and Data Engineer roles. It will also be useful for analytics professionals and ETL developers.

Prerequisites

Before proceeding with this full course, it is good to have prior exposure to Python programming, database concepts, and any flavor of the Linux operating system.

-----------------------------------------------------------------------------------------------------------------------

Check out our topic-wise full course playlists on some of the most popular technologies:

SQL Full Course Playlist-

PYTHON Full Course Playlist-

Data Warehouse Playlist-

Unix Shell Scripting Full Course Playlist-

-----------------------------------------------------------------------------------------------------------------------

Don't forget to like and follow us on our social media accounts:

Facebook-

Instagram-

Twitter-

Tumblr-

-----------------------------------------------------------------------------------------------------------------------

Channel Description-

AmpCode provides an e-learning platform with a mission of making education accessible to every student. AmpCode offers tutorials and full courses on some of the best technologies in the world today. By subscribing to this channel, you will never miss out on high-quality videos on trending topics in the areas of Big Data & Hadoop, DevOps, Machine Learning, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, AWS, Digital Marketing, and many more.

#pyspark #bigdata #datascience #dataanalytics #datascientist #spark #dataengineering #apachespark
Comments


NameError Traceback (most recent call last)
Cell In[14], line 2
1 data_2=data.select("industry", "value").\
----> 2 filter(Col("value")>1000).\
3 orderBy(desc("value"))

NameError: name 'Col' is not defined

mahendranaidu
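
The NameError in the comment above comes from the capital C: the function is col, it lives in pyspark.sql.functions, and it must be imported before use. A corrected sketch, assuming data is the DataFrame loaded earlier in the lecture:

from pyspark.sql.functions import col, desc

# col and desc are lowercase and must be imported explicitly
data_2 = data.select("industry", "value").\
    filter(col("value") > 1000).\
    orderBy(desc("value"))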

RuntimeError: Java gateway process exited before sending its port number -- how do I solve this?

avinash
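
This error usually means PySpark could not launch the JVM, most often because JAVA_HOME is unset or points at the wrong folder. One common fix, sketched here with the install paths from the description above (adjust them to your machine), is to set the variables before creating the session:

import os

# PySpark reads these when it launches the Java gateway,
# so they must be set before the SparkSession is created
os.environ["JAVA_HOME"] = r"C:\java\jdk"
os.environ["SPARK_HOME"] = r"C:\spark\spark-3.3.1-bin-hadoop2"

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()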

How do I run a Spark application on a cluster?

ashishveer
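
For a standalone Spark cluster, the usual approach is to point the session at the cluster's master URL instead of local[*]; spark://master-host:7077 below is a placeholder for your own cluster's address. Alternatively, the script can be handed to the spark-submit tool in %SPARK_HOME%\bin.

from pyspark.sql import SparkSession

# Connect to a standalone cluster master instead of running locally
spark = SparkSession.builder.\
    appName("FirstPySparkApp").\
    master("spark://master-host:7077").\
    getOrCreate()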

I cannot download the CSV file. Can you please check why, or give the website link so that we can download it directly from there?

jankipatel

Very useful for me. I have Databricks at my job, but I want to practice my queries on my personal laptop; thanks to you, I now know how.

riomorder

My Spark is treating every column as a string:
root
 |-- description: string (nullable = true)
 |-- industry: string (nullable = true)
 |-- level: string (nullable = true)
 |-- size: string (nullable = true)
 |-- line_code: string (nullable = true)
 |-- value: string (nullable = true)

I have written the same code as you did in the video.
# Creating the DataFrame

# as our dataset already has a header, we pass inferSchema as true and header as true

data = spark.read.format('csv').\
option('inferScheme', 'true').\
option('header', 'true').\
option('path', 'operations_management.csv').\
load()

Can anyone please help?

nayanagrawal
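
The columns come back as strings because the option name in the snippet above is misspelled: the CSV reader silently ignores the unknown key 'inferScheme', so no schema inference happens. The option is 'inferSchema'. A corrected sketch:

# 'inferSchema' (not 'inferScheme') asks Spark to detect column types;
# unknown option names are ignored, which leaves every column a string
data = spark.read.format('csv').\
    option('inferSchema', 'true').\
    option('header', 'true').\
    option('path', 'operations_management.csv').\
    load()

data.printSchema()   # value should now come back as a numeric type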

Very useful as a beginner, and a clear-cut explanation.

sriponnirealestates

I have tried several Spark series and never got very far. I have gone through all of yours in a row so far and think you do a really good job. Thanks for putting this together, cheers!

patrickwheeler

Excellent lecture, sir. Truly adorable...

sidindian

It's throwing errors at me from everywhere, claiming col and desc are not recognized names. How on earth can you make your app work without issues?

flosrv

Wonderful! I'm not a native English speaker, but I've understood all the sessions. Top, top! You are the greatest; thanks for sharing with us. From Angola.

albertopedro

I like the way you are explaining the code.

sachindubey