Peter Hoffmann - PySpark - Data processing in Python on top of Apache Spark.

[EuroPython 2015]
[22 July 2015]
[Bilbao, Euskadi, Spain]
[Apache Spark][1] is a computational engine for large-scale data processing. It
is responsible for scheduling, distributing and monitoring applications that
consist of many computational tasks running across many worker machines on a
computing cluster.
This talk will give an overview of PySpark with a focus on Resilient
Distributed Datasets and the DataFrame API. While Spark Core itself is written
in Scala and runs on the JVM, PySpark exposes the Spark programming model to
Python. It defines an API for Resilient Distributed Datasets (RDDs). RDDs are a
distributed memory abstraction that lets programmers perform in-memory
computations on large clusters in a fault-tolerant manner. RDDs are immutable,
partitioned collections of objects. Transformations construct a new RDD from a
previous one; actions compute a result based on an RDD. Multiple computation
steps are expressed as a directed acyclic graph (DAG). The DAG execution model
is a generalization of the Hadoop MapReduce computation model.
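As a minimal sketch of the RDD model described above (not taken from the talk; the sample data and application name are placeholders), transformations such as `flatMap` and `filter` are lazy and only extend the DAG, while actions such as `count` and `collect` trigger execution:

```python
# Minimal RDD sketch, assuming a local Spark installation.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-example")  # placeholder app name

# Transformations are lazy: they only build up the DAG.
lines = sc.parallelize(["spark is fast", "python on spark", "hello world"])
words = lines.flatMap(lambda line: line.split())      # transformation
spark_words = words.filter(lambda w: "spark" in w)    # transformation

# Actions trigger execution of the DAG on the cluster (or local threads here).
print(spark_words.count())    # 2
print(spark_words.collect())  # ['spark', 'spark']

sc.stop()
```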
The Spark DataFrame API was introduced in Spark 1.3. DataFrames evolve Spark's
RDD model and are inspired by Pandas and R data frames. The API provides
simplified operators for filtering, aggregating, and projecting over large
datasets. The DataFrame API supports different data sources such as JSON
files, Parquet files, Hive tables and JDBC database connections.
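A rough illustration of the DataFrame API (again not from the talk, and using the reader/writer interface of later PySpark releases; the `people.json` path and the `name`/`age` columns are hypothetical):

```python
# Minimal DataFrame sketch: projection, filtering, aggregation and data sources.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local[*]", "dataframe-example")  # placeholder app name
sqlContext = SQLContext(sc)

# Load a DataFrame from a JSON data source (one JSON object per line).
df = sqlContext.read.json("people.json")  # hypothetical input file

# Filtering and projection with simplified operators.
adults = df.filter(df.age >= 18).select("name", "age")

# Aggregation over the filtered data.
adults.groupBy("name").count().show()

# DataFrames can also be written to (and read back from) Parquet files.
adults.write.parquet("adults.parquet")

sc.stop()
```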
Resources:
- [An Architecture for Fast and General Data Processing on Large Clusters][2] - Matei Zaharia
- [Spark][6]: Cluster Computing with Working Sets - Matei Zaharia et al.
- [Resilient Distributed Datasets][5]: A Fault-Tolerant Abstraction for In-Memory Cluster Computing - Matei Zaharia et al.
- [Learning Spark][3]: Lightning-Fast Big Data Analysis - O'Reilly
- [Advanced Analytics with Spark][4]: Patterns for Learning from Data at Scale - O'Reilly