Why Delta Lake is BETTER than Parquet

Demo from the presentation "Degrading Performance? You Might Be Suffering From the Small Files Syndrome" at Data & AI Summit 2021 (formerly Spark + AI Summit), by @AdiPolak.

Whether your data pipelines handle real-time event-driven streams, near-real-time streams, or batch processing jobs, working with massive amounts of data spread across many small files, especially Parquet files, will degrade your system's performance.
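
One common way those small files accumulate is a streaming job writing each micro-batch straight to Parquet. A minimal PySpark sketch of that pattern, assuming Spark's built-in rate source purely for illustration and hypothetical output and checkpoint paths:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("small-files-demo").getOrCreate()

# Synthetic low-volume stream: 10 rows per second from the built-in rate source.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Every micro-batch writes its own set of tiny Parquet files; over hours this
# quietly grows into thousands of small files in the output directory.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")              # hypothetical output path
    .option("checkpointLocation", "/data/_chk")  # hypothetical checkpoint path
    .trigger(processingTime="10 seconds")
    .start()
)
query.awaitTermination()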

A small file is one that is significantly smaller than the storage block size. Yes, even object stores such as Amazon S3 and Azure Blob Storage have a minimum block size. Storing objects far smaller than that block can waste space on disk, because the storage layer is optimized for fast reads and writes at the block size, not below it.
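
A quick way to check whether a dataset suffers from this is to compare its Parquet file sizes against a target file size. A minimal sketch, assuming a locally mounted dataset at a hypothetical /data/events path (on S3 you would list objects with boto3 instead); the 128 MB target is a common Spark convention, not a value from the talk:

from pathlib import Path

TARGET_FILE_SIZE = 128 * 1024 * 1024  # common target file size, in bytes
SMALL_FILE_RATIO = 0.1                # treat files under 10% of the target as "small"

def report_small_files(dataset_path: str) -> None:
    # Recursively find Parquet files and flag those well below the target size.
    files = list(Path(dataset_path).rglob("*.parquet"))
    threshold = int(TARGET_FILE_SIZE * SMALL_FILE_RATIO)
    small = [f for f in files if f.stat().st_size < threshold]
    print(f"{len(small)} of {len(files)} Parquet files are smaller than "
          f"{threshold // (1024 * 1024)} MB")

report_small_files("/data/events")  # hypothetical dataset path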

To understand why this happens, you first need to understand how cloud storage works with the Apache Spark engine. In this session, you will learn about Parquet, the storage API calls, how they work together, why small files are a problem, and how you can leverage Delta Lake for a simpler, cleaner solution.
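
For instance, Delta Lake can compact a table's small files in place. A minimal sketch, assuming PySpark with the delta-spark package (Delta Lake 2.x, where DeltaTable.optimize() is available) and a hypothetical table path:

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("compact-small-files")
    # Standard configuration for enabling Delta Lake in open-source Spark.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

table = DeltaTable.forPath(spark, "/data/events_delta")  # hypothetical table path

# Bin-packs many small files into fewer, larger ones without changing the data.
table.optimize().executeCompaction()

# Later, remove files no longer referenced by the table (default 7-day retention).
table.vacuum()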