Dynamic Databricks Workflows - Advancing Spark

Last week saw the General Availability of dynamic functionality for Databricks Workflows, in the form of the parameterized ForEach task, but what does that mean? And why should we care?

For a long time we've been using external orchestration tools whenever things had to be flexible, metadata-driven or simply run at large scale - but with these changes we have a cheap, flexible way of achieving some pretty complex orchestration inside Databricks itself!

In this video Simon looks at a quick pattern for running a medallion-style lake processing routine, all driven by some JSON metadata!
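For anyone who wants to try that pattern, here is a minimal sketch of the two halves of the loop; the task key, notebook parameter name and table list below are illustrative assumptions rather than the exact setup from the video. A metadata task publishes a list of JSON strings as a task value, the ForEach task uses that list as its input, and each iteration hands one element to a child notebook.

# Metadata task (Python notebook): publish the list the ForEach task will iterate over.
import json

table_configs = [
    {"table": "customers", "source_path": "/raw/customers", "target": "bronze.customers"},
    {"table": "orders", "source_path": "/raw/orders", "target": "bronze.orders"},
]
# Each element of this list becomes one {{input}} value in the ForEach task.
dbutils.jobs.taskValues.set(key="table_configs", value=[json.dumps(c) for c in table_configs])

# Child notebook (runs once per iteration): receive a single table definition.
# The ForEach task maps {{input}} to a notebook parameter, assumed here to be "table_config".
config = json.loads(dbutils.widgets.get("table_config"))
print(f"Loading {config['source_path']} into {config['target']}")

In the job definition, the ForEach task's input would then reference the task value (for example {{tasks.get_metadata.values.table_configs}}, assuming the metadata task is called get_metadata), with a concurrency setting controlling how many tables are processed in parallel.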

Comments

This is a great feature. We started using it right away when it became available as a private preview for production jobs; that's how great it is. I only have two core complaints:

1. It's not possible to set up unlimited retries for child tasks, so it's not good for launching multiple Auto Loader tasks that need to run continuously.
2. When you are stepping through a previous or currently running workflow, you cannot easily get back to the parent job. It trips me up every time.

kentmaxwell

Great content. I hope you can do another video showcasing the use of a shared cluster. 😊

SheilaRuthFaller

Hooray, Databricks Workflows does ForEach!
[Cue side-eye from ADF.]

pic

Interesting. We used ADF for orchestration in our project after finding out that DLT does not handle dynamic ingest processing. Will have a look at this. We are using a dedicated interactive cluster.

brads

Interesting… how well does it handle failure and restarts after failure? Say the job fails on the second of the inputs, or on the second step inside an input, how does it handle it? Does a repair run start a new run from the beginning? Does the whole thing fail immediately if one of the concurrent iterations fails?

rbharath

It's a really good feature and brings the prospect of removing tools like ADF. However, I tried running an ingest notebook using Auto Loader where I pass in a dictionary of values via a task value reference, and when I tried doing it for a large number of tables I hit the 48 KiB limit. So I will have to revert to using a threading function, or work out how I can chunk up the data being passed to the child notebook and add another loop.
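One way around that limit, sketched here under stated assumptions (the control table name, its columns and the widget name are hypothetical): pass only a lightweight key per iteration and have the child notebook look the full configuration up from a metadata Delta table, so the task value stays small no matter how many tables there are.

# Metadata task: publish only table names, not full configuration dictionaries.
table_names = [r.table_name for r in
               spark.table("control.table_metadata").select("table_name").collect()]
dbutils.jobs.taskValues.set(key="table_names", value=table_names)

# Child notebook (one ForEach iteration): rehydrate the full config for its table.
from pyspark.sql import functions as F

table_name = dbutils.widgets.get("table_name")  # mapped from {{input}} in the ForEach task
config = (spark.table("control.table_metadata")
          .where(F.col("table_name") == table_name)
          .first()
          .asDict())
# config now holds the source path, target table, schema hints, etc. for the Auto Loader run.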

GraemeCash

Are the scripts available in a repo? Using ForEach in ADF streamlines things brilliantly, and I can't wait to use it in Databricks Workflows.

jeanheichman

Thanks for the video! I'm stuck on something I can't quite figure out. Maybe you'd be willing to help! How do you run sequential backfills in Databricks?

The concept is: you have an SCD Type 2 / cumulative table, and you need to backfill it to the same state it's currently in, with some adjustments, using the dependencies' Delta table versioning.

With Airflow and without Delta tables, you'd have a data lake table that holds a daily dump as a date partition within the same table. Using Airflow and this snapshot-style table, you would simply compare the last two snapshots, and when you set it up in Airflow you would say "depends_on_past". In this way, you would go back and, day by day, do your compare for each day.

What I can't figure out is an elegant way to do this in Databricks with a Delta table, in particular because the Delta table does not have a daily dump partition (I guess I could add one, but I'm trying to save space!).

The closest thing I can think of, which seems awkward, is to do this with a ForEach loop and a metadata task like you have, but setting the version dates I want to run off in a list.

So if you imagine an SCD Type 2 table I'm trying to backfill, you:
1. set a date for starting the sequential backfill,
2. get the previous update/insert timestamps from the SCD Type 2 table's version history,
3. concatenate that with, say, a daily date for the dependency, for previous versions of that Delta table that are prior to the new table's creation date.

Hopefully this makes sense, but then you can replay the backfill over a deterministic set of dates, which would give you the same state back for the backfill.


Can you think of a more elegant way to do this with delta tables? It seems very complicated 😅
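For what it's worth, here is a possible sketch of that "deterministic version list" idea using the Delta table's own commit history instead of a daily dump partition; the table name and task-value key are assumptions.

from pyspark.sql import functions as F

# Metadata task: build an ordered list of versions/timestamps from the dependency's Delta history.
history = spark.sql("DESCRIBE HISTORY silver.dependency_table")
versions = (history
            .select("version", F.date_format("timestamp", "yyyy-MM-dd HH:mm:ss").alias("as_of"))
            .orderBy("version")
            .collect())

# Oldest first, with the ForEach concurrency set to 1, gives a depends_on_past-style sequential replay.
backfill_inputs = [f'{{"version": {r.version}, "as_of": "{r.as_of}"}}' for r in versions]
dbutils.jobs.taskValues.set(key="backfill_versions", value=backfill_inputs)

# Each child iteration would then read the dependency as of its assigned version, e.g.
#   spark.read.option("versionAsOf", version).table("silver.dependency_table")
# and apply the SCD Type 2 merge for that slice.

Whether that beats just adding a snapshot partition is debatable, but it keeps the replay deterministic without storing extra copies of the data.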

colinmanko

By putting sub-flows in one workflow, does that mean sharing a cluster between them is possible? I mean a normal cluster, not serverless.
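Within a single workflow, classic compute can be shared by declaring a job cluster once and pointing multiple tasks at it via a job_cluster_key. A rough sketch with the Databricks Python SDK follows; the job name, notebook paths, node type and Spark version are placeholder assumptions, and it is worth checking the current docs for how ForEach child tasks specifically interact with shared job clusters.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()

# One classic job cluster, declared once and reused by every task that references its key.
shared = jobs.JobCluster(
    job_cluster_key="shared_cluster",
    new_cluster=compute.ClusterSpec(
        spark_version="15.4.x-scala2.12",
        node_type_id="Standard_DS3_v2",
        num_workers=2,
    ),
)

w.jobs.create(
    name="medallion_demo",
    job_clusters=[shared],
    tasks=[
        jobs.Task(
            task_key="bronze",
            job_cluster_key="shared_cluster",
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/demo/bronze_load"),
        ),
        jobs.Task(
            task_key="silver",
            job_cluster_key="shared_cluster",
            depends_on=[jobs.TaskDependency(task_key="bronze")],
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/demo/silver_transform"),
        ),
    ],
)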

StartDataLate