Getting data into your Microsoft Fabric Lakehouse using Load to Tables

The future of data analytics is here, and it's called Lakehouse! Microsoft Fabric Lakehouse is revolutionizing the way we manage and analyze data.

📌 In this episode, we'll explore:

📊 Load Single File into a New or Existing Table

📊 Learn which file types are supported in the Load to Tables feature.

📊 Table and Column Name Validation and Rules

📊 Walk through the process of selecting a file in the Lakehouse Files section and loading it into a new Delta table.
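
Load to Tables is a point-and-click action in the Lakehouse UI, but for reference, a rough notebook equivalent of this walkthrough might look like the sketch below, run from a Fabric notebook attached to the Lakehouse. The file path and table name are placeholders, not values from the episode.

```python
# Minimal sketch (hypothetical path and table name): read a CSV from the Lakehouse
# Files section and save it as a Delta table in the Tables section.
df = (
    spark.read                          # `spark` is the session predefined in Fabric notebooks
    .option("header", "true")           # first row contains column names
    .option("inferSchema", "true")      # let Spark infer column types
    .csv("Files/sales/orders.csv")      # relative path under the Lakehouse Files section
)

# Writing with the Delta format and saveAsTable registers the result under Tables.
df.write.format("delta").mode("overwrite").saveAsTable("orders")
```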

🎙 Meet the Speakers:

👤 Guest from Microsoft Fabric Product Group: Daniel Coelho, Principal Product Manager



👤 Host: Estera Kot, Senior Product Manager at Microsoft

#microsoft #microsoftfabric
Comments

This is really awesome! For some files this is a great method. It's a "use it when it works" kind of thing.

knuckleheadmcspazatron

There are many videos about loading data into a Lakehouse. How do we manage/edit the data once it is in there?

TomFrost

Nice. If I save the files from the source into the Lakehouse Files section as CSV and JSON, will they be stored as Delta Parquet? If not, why do we say data is saved in OneLake as Delta Parquet?

sanishthomas
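
Not from the video, but on the question above: my understanding is that files dropped into the Files section keep their original format (CSV stays CSV, JSON stays JSON); it is the Tables section that stores data as Delta/Parquet in OneLake, for example after running Load to Tables. A small notebook sketch of the distinction, with hypothetical paths and names:

```python
# Hypothetical names; illustrates Files vs Tables in a Fabric Lakehouse notebook.

# A CSV uploaded to the Files section is still just a CSV and is read as such.
raw_df = spark.read.option("header", "true").csv("Files/raw/orders.csv")

# A table in the Tables section (e.g. one created by Load to Tables) is a Delta table.
table_df = spark.read.table("orders")
```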

Hi, thanks for the video. The use cases and possibilities were explained very thoroughly; loading from a folder in particular comes in handy.

There is one thing I don't understand: when the preview doesn't display correctly, why should we drop and load the table again? How can you be sure the problem won't repeat? I would rather see the reason the table didn't load correctly so I can understand where the problem is. What do you think?

lukaszk

Very helpful video.
Does/will Load to Tables support incremental load from Lakehouse files using merge?
I.e., if files containing inserts, updates, and deletes are copied into the Lakehouse Files section, each file needs to be merged (in chronological order) into the Lakehouse table so that the correct final state is attained.
Also, is there a way to retain history in the Lakehouse table with the ability to time travel (a popular feature of other table formats like Iceberg)?
Thanks in advance for any pointers/suggestions.

billkuhn
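
The video doesn't answer the merge question above, but for reference, an incremental merge of change files into an existing Delta table is typically done in a notebook with the Delta Lake MERGE API, and Delta tables keep version history for time travel. A rough sketch, with hypothetical table, column, and file names (batches would be processed one at a time in chronological order):

```python
from delta.tables import DeltaTable

# Hypothetical example: apply one batch of changes (inserts/updates/deletes)
# from the Files section to an existing Delta table, keyed on "id".
# An "op" column marks each row as I (insert), U (update), or D (delete).
changes_df = spark.read.option("header", "true").csv("Files/changes/batch_001.csv")

target = DeltaTable.forName(spark, "orders")

(
    target.alias("t")
    .merge(changes_df.alias("s"), "t.id = s.id")
    .whenMatchedDelete(condition="s.op = 'D'")        # rows flagged as deletes
    .whenMatchedUpdateAll(condition="s.op = 'U'")     # rows flagged as updates
    .whenNotMatchedInsertAll(condition="s.op = 'I'")  # new rows
    .execute()
)

# Delta tables retain version history, so time travel works out of the box:
previous = spark.sql("SELECT * FROM orders VERSION AS OF 5")
```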

You import the CSV from a folder on the computer, but where is the connection to the file that originated the CSV? I see that the CSV in Fabric is static and is not being updated.

ricardoabella

That automation thing can be very handy.

XEQUTE

We have an older AX09 database that is read-only. It has about 1,000 tables. There's absolutely no easy way to copy those tables into a Lakehouse, even with pipelines. For one, the copy tool doesn't support schemas, so dbo.inventtrans becomes dbo.dbo_inventtrans in the target. Furthermore, you basically have to export one table at a time, because when selecting multiple tables, schema mappings are not generated. Then add to that the strictly case-sensitive queries. From Azure SQL to Azure Serverless to Fabric Warehouse in a span of just 4 years. It's too much to ask companies that have lots of data and integrations going on to make the switch every time.

DanielWillen