Microsoft Fabric Data Engineering [Full Course]

Download your certificate of completion after you finish this course:
Student files

Get ready for an in-depth exploration of Microsoft Fabric's data engineering capabilities in our upcoming Learn with the Nerds!
Dive into the world of data factory pipelines and Spark notebooks, where we'll unravel the secrets behind designing efficient data pipelines and leveraging interactive notebooks for seamless data processing.
Whether you're a seasoned data engineer or just stepping into the realm of data architecture, this will be your guide to mastering the tools that make data collection, storage, and analysis a breeze.

0:00 - Introduction to Fabric in Azure Data Factory
12:45 - Importance of Compute in Fabric
30:15 - Setting Up and Configuring a Fabric Workspace
48:10 - Working with Data Lakes and External Tools
1:08:55 - Running Pipelines and Understanding Performance
1:12:01 - Utilizing Spark Notebooks for Big Data
1:20:37 - Integration Possibilities with Snowflake and On-Premises Data
1:26:44 - Wrapping Up

Next step on your journey:


Let's connect:

Pragmatic Works
7175 Hwy 17, Suite 2 Fleming Island, FL 32003
Phone: (904) 638-5743

#Fabric #PragmaticWorks #AustinLibal #Training #Microsoft #Tech #FreeConference #LearnWithTheNerds

**Any sales mentioned in the video may no longer be valid. Offers are subject to change with/without notice and are for a limited time only.
Comments

I paused in between once in a while to sip some water for you, Austin :D Great session! Cheers! :)

ArushiAcharya-xb

Good job, definitely enjoying it, thanks a lot for sharing. Blessings.

thetrue_fan_tv_nline

This is really commendable, thank you

oluwafemifelix

More questions from the live session that were not answered during it:

Q) Is there any Microsoft certification available that covers Microsoft Fabric?

A) A new one has just been announced, called DP-600: Implementing Analytics Solutions Using Microsoft Fabric! Stay tuned for content specifically around that in the future when it is generally available!

austinlibal

A really fantastic session. Thank you. I have a question, and I don't know if it can be answered here. Is it possible to define a container to store my objects (files, Delta tables, and others)? I understand that when you create your lakehouse, Microsoft provides you with a specific location, but I would like to define my own area to store my objects. To be honest, I would like to use the concept of Medallion Architecture with landing zone, Bronze, Silver, and Gold spaces. Thank you

woliveiras
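(Editor's note: the video doesn't cover this directly, but one common workaround is to keep the lakehouse's managed storage location and model the medallion zones as folders under its Files area, or as separate lakehouses per zone. A minimal sketch of the folder approach, assuming the default lakehouse is attached to the notebook; the zone names are illustrative, not prescribed by Fabric.)

```python
# Sketch only: lay out medallion-style zones as folders under the lakehouse's
# Files area using Microsoft Spark Utilities. The folder names below
# (landing/bronze/silver/gold) are illustrative assumptions.
from notebookutils import mssparkutils

zones = ["Files/landing", "Files/bronze", "Files/silver", "Files/gold"]

for zone in zones:
    mssparkutils.fs.mkdirs(zone)        # creates the folder if it does not exist

# Quick check that the zones are there
for item in mssparkutils.fs.ls("Files"):
    print(item.name)
```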

I was wondering if we could create a shortcut to a folder in ADLS Gen2 instead of using a full data pipeline, given that our data doesn't need any transformation before it goes into the data lake. Also, how can I access data frames defined in the notebook for Power BI visuals or reports, or are notebooks merely another data pipeline to ingest (aggregated or transformed) data into the lakehouse?

juliekim
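(Editor's note on the second part of that question: a dataframe that only lives in a notebook session isn't visible to Power BI, but writing it to the lakehouse as a Delta table makes it queryable through the lakehouse's SQL analytics endpoint and default semantic model. A minimal sketch, with illustrative file, column, and table names.)

```python
# Sketch only (file, column, and table names are illustrative): persist a
# dataframe built in the notebook as a Delta table in the default lakehouse,
# which makes it available to Power BI via the SQL analytics endpoint.
from pyspark.sql import functions as F

df_summary = (
    spark.read.option("header", "true")
         .option("inferSchema", "true")
         .csv("Files/landing/sales.csv")           # hypothetical source file
         .groupBy("Region")
         .agg(F.sum("Amount").alias("TotalAmount"))
)

(df_summary.write
           .format("delta")
           .mode("overwrite")
           .saveAsTable("sales_summary"))           # lands under Tables/
```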

I really like the part where you show how to build a dynamic pipeline with parameters!!
I find that many tutorials only show basic functionality, not the in-depth, scalable solutions that are required in real-world environments.

maxgaming

Are you able to use autoloader within your Fabric notebook to connect to a OneLake folder?

brandonperks
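(Editor's note: Auto Loader, the cloudFiles source, is a Databricks-specific feature, so it isn't something I'd expect to work in a Fabric notebook out of the box. Plain Spark Structured Streaming over a file folder gives similar incremental pickup. A rough sketch, where the paths, schema, and destination table name are assumptions.)

```python
# Sketch only: incremental file pickup with plain Spark Structured Streaming
# (Auto Loader / cloudFiles is Databricks-specific). Paths, schema, and the
# destination table name are assumptions for illustration.
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

order_schema = StructType([
    StructField("OrderId", StringType()),
    StructField("Amount", DoubleType()),
])

new_files = (
    spark.readStream
         .format("csv")
         .schema(order_schema)              # streaming file sources need an explicit schema
         .option("header", "true")
         .load("Files/landing/orders/")     # hypothetical OneLake folder
)

(new_files.writeStream
          .format("delta")
          .option("checkpointLocation", "Files/checkpoints/orders")
          .outputMode("append")
          .toTable("orders_raw"))           # only new files are processed on each run
```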

My company has some data in Azure SQL that refreshes once every 24 hours.
I can write a dataflow to bring selected rows into Power BI.
However, I want to create some reusable summary tables, for which I think Python would be great.

Is it possible that in the Lakehouse,
1 - I create a Dataflow that brings data from Azure SQL into a table
2 - Create a scheduled pipeline that runs every 24 hours and runs the Dataflow that overwrites my table
3 - Use PySpark or something similar to create the summary table
4 - Write that table to the Lakehouse

I am not sure how steps 3 and 4 would be automated on a schedule, and I am not sure if the above is possible at all.

Can you please help?

HarshJain-wv
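(Editor's note: steps 1 and 2 map directly onto a Dataflow Gen2 plus a scheduled pipeline; for steps 3 and 4, a notebook can be added as another activity in that same pipeline, after the Dataflow, so the whole chain runs on one schedule. A rough sketch of what that notebook cell might look like, with illustrative table and column names.)

```python
# Sketch only (table and column names are assumptions): steps 3 and 4 as a
# notebook cell. Read the table the Dataflow refreshes, build the summary with
# PySpark, and overwrite a Delta table in the lakehouse. Adding this notebook
# as an activity after the Dataflow in the scheduled pipeline automates it.
from pyspark.sql import functions as F

orders = spark.read.table("orders")          # table loaded by the Dataflow

daily_summary = (
    orders.groupBy("OrderDate", "Region")
          .agg(F.sum("Amount").alias("TotalAmount"),
               F.count("*").alias("OrderCount"))
)

(daily_summary.write
              .format("delta")
              .mode("overwrite")             # rebuilt on every scheduled run
              .saveAsTable("orders_daily_summary"))
```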

Hi, I have another question regarding running pipelines. Can we run a pipeline based on file modifications or new files arriving in our folder? I want to run my pipeline whenever we have a new file or an existing one is updated. In this case, it is essential to run the pipeline only for new or updated files. Thank you

woliveiras

Hi @austinlibal I don't know if you are still answering questions here. I have followed the video, doing every step with you. But when I got to the step where you go from Get Metadata -> Filter -> ForEach1 and then Invoke Pipeline, and I click on run, I get this error:

Activity failed because an inner activity failed; Inner activity name: Invoke pipeline1, Error: Operation on target Copy data1 failed: Lakehouse table name should only contain letters, numbers, and underscores. The name must also be no more than 256 characters long.

I can't find any answers on how to solve this issue.

ahmedodawaa
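(Editor's note, not an official answer: that error usually means the value being passed as the destination table name to Copy data1 contains characters the Lakehouse doesn't accept, typically a file name that still has its .csv extension, spaces, or dashes. Stripping the extension and replacing anything that isn't a letter, number, or underscore fixes it; the same logic can be done inside the pipeline with expression functions such as replace() on the value coming out of the ForEach. A small Python sketch of the naming rule, using a hypothetical file name.)

```python
import re

def to_lakehouse_table_name(file_name: str) -> str:
    """Turn a file name such as 'Sales Data 2023.csv' into a name the Lakehouse
    accepts: letters, numbers, and underscores only, at most 256 characters."""
    base = file_name.rsplit(".", 1)[0]             # drop the file extension
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", base)  # replace anything else with _
    return cleaned[:256]

print(to_lakehouse_table_name("Sales Data 2023.csv"))   # -> Sales_Data_2023
```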

I want to learn MS Fabric. Am I supposed to learn SQL and PBI before getting into MS Fabric? I just want to check, as I am coming from a non-technical background. If possible, can you guide me with a roadmap?

PrasadKatragadda-gh

You speak very fast. Not everyone is fluent in English.

drisselfigha