How to Build a Cloud Data Platform Part 2 - ETL Processing

In part 2 of this 4-part series you'll learn how to create Delta tables, what Delta Lake Time Travel is, and, last but not least, how to perform an upsert operation on a Delta table. At the end of each session, you will be given redemption codes for additional free Databricks self-paced training and/or demo notebooks for hands-on practice.
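
For quick reference, here is a minimal Spark SQL sketch of the three operations the session covers. The table, column, and staging-table names (iot_events, iot_events_updates) are hypothetical placeholders, not the ones used in the demo notebook.

-- Create a Delta table
CREATE TABLE IF NOT EXISTS iot_events (
  device_id INT,
  name STRING,
  heartrate DOUBLE,
  event_time TIMESTAMP
) USING DELTA;

-- Time Travel: read an earlier snapshot by version number or by timestamp
SELECT * FROM iot_events VERSION AS OF 0;
SELECT * FROM iot_events TIMESTAMP AS OF '2020-06-01';

-- Upsert: merge new or changed rows from a staging table into the Delta table
MERGE INTO iot_events AS target
USING iot_events_updates AS source
ON target.device_id = source.device_id AND target.event_time = source.event_time
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

Every write to a Delta table produces a new versioned snapshot in the transaction log, which is what makes both the time-travel queries and the atomic MERGE possible.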

ABOUT
Databricks provides a unified data analytics platform, powered by Apache Spark™, that accelerates innovation by unifying data science, engineering, and business.

Comments

1:13:55 Use WITH CUBE in the GROUP BY clause, not in the SELECT clause.

SELECT device_id, name, avg(heartrate)
FROM <table>
GROUP BY device_id, name
WITH CUBE

thomsondcruz

This is super informative. There are SaaS-type products that seem to abstract many of these ETL tasks away via a UI for pipelining. Well, they claim to. Or at least they provide a means of doing data transformations in a "low code / no code" way. I'm skeptical. I'll keep watching these videos and learning :)

bogoodski

1:13:55 Under the aggregation section you can see CUBE, but it's in the GROUP BY clause, not the SELECT clause.

TheSQLPro

Great session just like the last one. Thank you!

TheSQLPro

1:13:55 SELECT device_id, name, avg(heartrate) FROM table GROUP BY device_id, name WITH CUBE

thomsondcruz

Amazing, thanks. A lot of things are clarified here.

namanbhayani

Where can I find the notebook? Please share a link to the notebook used in the demo.

zeeshanmirza

The Azure Data Factory example at 1:15:17 did not have anything to do with Databricks Delta.

chrispollock