Three data pipeline use cases to make your life easier (Microsoft Fabric)

Data pipelines allow you to automate and schedule data processing tasks in Microsoft Fabric. In this video we walk through three common use cases: getting data from Azure, scheduling a stored procedure, and scheduling a Synapse Data Engineering notebook run.

0:00 Intro
1:08 Pipeline 1: Azure Blob to Data Warehouse
4:26 Pipeline 2: Trigger Stored Procedure
8:57 Pipeline 3: Trigger Synapse notebooks
10:31 Outro
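
To make the third use case concrete, here is a minimal sketch of the kind of cell a Synapse Data Engineering notebook scheduled by a pipeline might run. The table and column names are hypothetical; the video's actual notebook may differ.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Lakehouse table; the pipeline only triggers this notebook on a schedule.
orders = spark.read.table("raw_orders")

# Aggregate order amounts per day.
daily = (orders
         .groupBy("order_date")
         .agg(F.sum("amount").alias("total_amount")))

# Overwrite the aggregate table on each scheduled run.
daily.write.mode("overwrite").saveAsTable("daily_order_totals")
```
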
Comments

Nice one, mate. Need to see your other videos.

shafa

Great videos, and thanks for sharing. I was searching for a pipeline that reads data from a text file with a header row and a footer row that have to be excluded, and that also adds column headers. Is that possible to achieve using the pipeline Copy activity? I'm able to achieve the same thing using Spark, but I was looking for a skip-rows option.

jayananair
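
A minimal PySpark sketch of the skip-rows approach the comment mentions, assuming a pipe-delimited text file at a hypothetical Files path; the delimiter, path, and column names are all illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical Lakehouse Files path.
lines = spark.read.text("Files/raw/export.txt").rdd.map(lambda r: r[0])

# Drop the header (first line) and footer (last line).
total = lines.count()
body = (lines.zipWithIndex()
             .filter(lambda pair: 0 < pair[1] < total - 1)
             .map(lambda pair: pair[0]))

# Split each remaining line and attach explicit column names.
rows = body.map(lambda line: tuple(line.split("|")))
df = rows.toDF(["customer_id", "order_date", "amount"])  # illustrative names
df.show()
```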

Nice video. How do we append data while writing to a destination table in a Warehouse using a data pipeline? I don't see any option like we have when writing to a table in a Lakehouse (e.g. "append" or "overwrite"). My source is a REST API.

AndyKabeer
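
For reference, the Lakehouse write modes the comment refers to look like this in a Spark notebook; the DataFrame contents and table name are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative DataFrame standing in for rows fetched from a REST API.
df = spark.createDataFrame([(1, "2024-01-01", 9.99)], ["id", "order_date", "amount"])

# Append rows to an existing Lakehouse table ("sales_orders" is hypothetical).
df.write.mode("append").saveAsTable("sales_orders")

# Or replace the table contents entirely:
# df.write.mode("overwrite").saveAsTable("sales_orders")
```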

Could you send the output of the notebook to the next step in the pipeline? Or would you just write the data somewhere and read it in another step?
Thanks

DanielWeikert
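
One common pattern for the first option (not shown in the video; a sketch assuming a Synapse/Fabric notebook) is to return a small exit value from the notebook with mssparkutils, which the pipeline's Notebook activity then surfaces in its activity output; larger results are better written to a Lakehouse table and read by the next step:

```python
# Inside the scheduled notebook: hand a small result back to the pipeline.
from notebookutils import mssparkutils

row_count = 42  # illustrative value computed earlier in the notebook

# The Notebook activity exposes this string as its exit value.
mssparkutils.notebook.exit(str(row_count))
```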

The data movement cost in Fabric is very high. Is anyone else having problems with this?

canaldovargas