How to Build Metadata-Driven Data Pipelines with Delta Live Tables

In this session, you will learn how you can use metaprogramming to automate the creation and management of Delta Live Tables pipelines at scale. The goal is to make it easy to use DLT for large-scale migrations, and other use cases that require ingesting and managing hundreds or thousands of tables, using generic code components and configuration-driven pipelines that can be dynamically reused across different projects or datasets.
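The core idea described above — generic code plus per-table configuration — can be sketched with plain Python metaprogramming. The config entries, field names, and helper below are illustrative assumptions, not the speakers' actual framework; in a real DLT pipeline each generated function would read with `spark.readStream` and be registered via the `@dlt.table` decorator.

```python
# Hypothetical metadata: one entry per table to ingest.
# Field names ("name", "source_path", "format") are assumptions for this sketch.
TABLE_CONFIG = [
    {"name": "orders_bronze", "source_path": "/raw/orders", "format": "json"},
    {"name": "customers_bronze", "source_path": "/raw/customers", "format": "csv"},
]

def make_table_fn(spec):
    """Build one table-loading function from a config entry.

    The closure captures this table's spec, so a single generic body can be
    reused for hundreds of tables. In an actual DLT pipeline the returned
    function would call spark.readStream and carry @dlt.table(name=spec["name"]).
    """
    def load():
        # Stand-in for the real read; returns a description instead of a DataFrame.
        return f"read {spec['format']} from {spec['source_path']}"
    load.__name__ = spec["name"]
    return load

# Metaprogramming step: generate one function per config entry in a loop,
# rather than hand-writing hundreds of near-identical table definitions.
generated = {spec["name"]: make_table_fn(spec) for spec in TABLE_CONFIG}
```

Because the loop runs at pipeline-definition time, adding a new table is a one-line config change with no new code.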

Talk by: Mojgan Mazouchi and Ravi Gawai

Comments

It's not clear how this process would handle a source query that uses something like an aggregate — more relevant to gold than silver, perhaps — which DLT streaming doesn't support, so you may have to fully materialize the table instead of streaming it.

brads

Can we have a video on loading multiple tables using a single pipeline?

rishabhruwatia

Would appreciate it if Databricks provided a proper explanation — neither presenter's explanation is clear.

AnjaliH-wohm