Best Practices Using Azure SQL as Sink in ADF | Azure SQL and ADF Event | Data Exposed Special

Optimize common data movement scenarios like initial bulk loading, incremental load, or upsert when using Azure SQL as a destination in Azure Data Factory. In this session with Silvano Coriani, we will go through practical examples to illustrate best practices and recommended approaches for data engineers and developers.
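The incremental-load scenario mentioned above is typically driven by a watermark: each run copies only source rows modified since the last run, then advances the watermark. A minimal sketch of that pattern follows; this is not the session's actual script, Python's built-in sqlite3 stands in for Azure SQL, and the `src`, `sink`, and `watermark` table names are illustrative assumptions.

```python
import sqlite3

# Watermark-based incremental load, sketched against SQLite.
# In ADF this logic is split between a Lookup activity (read watermark),
# a Copy activity (filtered source query), and a watermark update.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE src  (id INTEGER PRIMARY KEY, val TEXT, modified INTEGER);
CREATE TABLE sink (id INTEGER PRIMARY KEY, val TEXT, modified INTEGER);
CREATE TABLE watermark (last_modified INTEGER);
INSERT INTO watermark VALUES (0);
INSERT INTO src VALUES (1, 'a', 10), (2, 'b', 20);
""")

def incremental_load(conn):
    """Copy only rows changed since the stored watermark, then advance it."""
    cur = conn.cursor()
    (wm,) = cur.execute("SELECT last_modified FROM watermark").fetchone()
    # Copy the delta; OR REPLACE makes re-runs idempotent on the key.
    cur.execute(
        "INSERT OR REPLACE INTO sink "
        "SELECT id, val, modified FROM src WHERE modified > ?", (wm,))
    # Advance the watermark to the highest change seen so far.
    cur.execute(
        "UPDATE watermark SET last_modified = "
        "(SELECT COALESCE(MAX(modified), ?) FROM src)", (wm,))
    conn.commit()

incremental_load(conn)
print(cur.execute("SELECT COUNT(*) FROM sink").fetchone()[0])  # 2 rows loaded
```

Running the function again after new rows land in `src` picks up only the rows above the stored watermark, which is what makes the load incremental rather than a full reload.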

#AzureSQL #AzureDataFactory #AroundtheClock
Comments

Great session. It would have been nice if you had presented the Azure SQL configuration and the other settings, including all the scripts used in the demo. I had been looking for a video like this for a long time and finally found it :)

sanjeevjain

Great video exploring the options. However, it's interesting that you suggest using the MERGE statement, given that MERGE has been controversial for many years due to its known and sometimes unpredictable bugs. I've used MERGE via dynamic SQL generation in a stored procedure, but I'm looking to move away from it because it occasionally has issues; an UPDATE plus an INSERT ... WHERE NOT EXISTS is far preferable as a best practice.

I would also love to have heard more about what the upsert setting actually does. You seemed to indicate it's a row-by-row compare, but is that really how it works? I'm surprised Microsoft would ship something so primitive and unusable in terms of performance.

Also curious that you mentioned adding a clustered columnstore index to improve performance. In my experience clustered columnstore indexes do not like updates: the updated rows end up in uncompressed delta-store pages, which is very costly. Bottom line, they very much slow down rather than speed up performance on provisioned IaaS resources, so what aspects of Azure SQL allow them to do the opposite of what I'd expect?

garymelhaff
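The set-based alternative to MERGE that the comment above recommends (an UPDATE for rows that already exist, followed by an INSERT for rows that don't, in one transaction) can be sketched as follows. This is only an illustration, not the session's code: Python's sqlite3 stands in for Azure SQL, and the `target`/`staging` table names are assumptions.

```python
import sqlite3

# Upsert as "UPDATE matches, then INSERT misses", two set-based
# statements committed together, instead of a single MERGE.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE target  (id INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT);
INSERT INTO target  VALUES (1, 'old'), (2, 'keep');
INSERT INTO staging VALUES (1, 'new'), (3, 'added');
""")

def upsert(conn):
    """Apply staged changes to the target in one transaction."""
    cur = conn.cursor()
    # 1) Update rows that already exist in the target.
    cur.execute("""
        UPDATE target
        SET val = (SELECT s.val FROM staging s WHERE s.id = target.id)
        WHERE EXISTS (SELECT 1 FROM staging s WHERE s.id = target.id)""")
    # 2) Insert rows that are not in the target yet.
    cur.execute("""
        INSERT INTO target (id, val)
        SELECT s.id, s.val FROM staging s
        WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id)""")
    conn.commit()

upsert(conn)
print(cur.execute("SELECT id, val FROM target ORDER BY id").fetchall())
# → [(1, 'new'), (2, 'keep'), (3, 'added')]
```

Because both statements are set-based, they avoid the row-by-row comparison the comment worries about, and wrapping them in one transaction keeps the upsert atomic.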

Can you please share a link to all the scripts used in the demo?

TheRanjeet