Compressed Backup - Performance - SQL in Sixty Seconds 196


0:00 Intro
0:12 Real-World Scenario
0:40 Demo: Compressed Backup - Performance
2:10 Demo: Compressed Restore - Performance
2:55 Summary
3:10 My Opinion


#SQLServer #PerformanceTuning #SQLAuthority
Comments

A few years ago, I had a customer who was taking uncompressed backups. The databases were quite large, and they had a huge disk IO issue.
One time we were forced to restore from the last backup, and the restore failed due to a timeout.
I had to transfer the backup to my own server, which had better performance and no IO issues, and the backup restored fine there.
I then took a compressed backup of their data from my server and transferred it back.
The compressed backup restored without any issues.

Ever since then, I have always recommended compressed backups to all my customers.

creamore

We handle huge databases, so compressed backup is a feature we cannot imagine living without. Storage, network bandwidth, cloud sharing, and much more are directly affected by backup compression. In a world of SSDs instead of ordinary SATA hard disks, it makes a big difference as well.

Rageshshah

I believe the same thing will apply to striped backups as well, or is there anything specific there worth discussing?

pillasudheer

This is a well-produced video, but I have a problem with a blanket recommendation for or against backup compression based solely on these results. The correct decision will depend on your data, your hardware, and your storage infrastructure. With the current Microsoft backup compression algorithm, compressing the data before it is written to the backup file(s) causes some CPU overhead while the backup is running.

If the logical drive(s) where the backup files are being written have relatively low sequential write performance, then the actual elapsed time for a compressed backup will usually be significantly lower, because less data is being written. Sequential write performance is the bottleneck, which is a fairly common scenario.

If the destination for the backup file(s) has very good sequential write performance, then compressing the backup can make the elapsed time significantly higher. This is not as common with real database servers, but it definitely can happen. People should test both ways in their environments to see what kind of results they get.

GlennBerrySQL
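
A minimal T-SQL sketch of the "test both ways" advice above, assuming a placeholder database name and backup path; SQL Server prints the pages processed, elapsed seconds, and MB/sec for each run in its message output:

-- Uncompressed baseline (database name and path are placeholders)
BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backup\YourDB_uncompressed.bak'
WITH NO_COMPRESSION, INIT, STATS = 10;

-- Same backup, compressed, to the same destination drive
BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backup\YourDB_compressed.bak'
WITH COMPRESSION, INIT, STATS = 10;

Whichever run finishes faster on your own hardware tells you which side of the sequential-write trade-off you are on.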

I have a question, if you can help. I have an Oracle linked server, and we run an hourly job that drops and inserts into a temp table from an Oracle OPENQUERY call. It normally completes in 3-4 minutes, but sometimes the job hangs, and the only way to stop the hung job is to close the TCP session and re-run it (I am unable to find the cause). This happens 2-3 times a week. Any thoughts?

afzaalawan

Hi Pinal!
One way to improve compressed backup throughput is to create the backup with multiple files, to take advantage of parallel compression. At least, that is what I observed in a (lazy) test I performed.

Edit to add test results:

Environment:
SQL 2019 Developer
StackOverflow database modified by Brent Ozar (2018, I think...) - 160 GB allocated
Fast disk (PCIe Gen 4)
Backups taken to the same drive as the data files.

Results:
NOT COMPRESSED - SINGLE FILE:
BACKUP DATABASE successfully processed 20932266 pages in 94.620 seconds (1728.316 MB/sec).
160 GB bak file

COMPRESSED - SINGLE FILE:
BACKUP DATABASE successfully processed 20932266 pages in 304.049 seconds (537.851 MB/sec).
47 GB bak file

NOT COMPRESSED - 4 FILES:
BACKUP DATABASE successfully processed 20932266 pages in 119.746 seconds (1365.668 MB/sec).
4 x 40 GB bak files

COMPRESSED - 4 FILES:
BACKUP DATABASE successfully processed 20932266 pages in 116.200 seconds (1407.343 MB/sec).
4 x roughly 12 GB bak files

So, as Steve Earle said, it all seems to depend on the hardware involved.

pablodominguezruiz
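
A minimal sketch of the striped, compressed backup described in the test above; the file paths (and the exact database name) are placeholders, and each stripe gets its own backup writer, which is presumably where the extra compression throughput comes from:

-- Compressed backup striped across four files (paths are placeholders)
BACKUP DATABASE [StackOverflow2018]
TO DISK = N'F:\Backup\SO_1.bak',
   DISK = N'F:\Backup\SO_2.bak',
   DISK = N'F:\Backup\SO_3.bak',
   DISK = N'F:\Backup\SO_4.bak'
WITH COMPRESSION, INIT, STATS = 10;

-- All four stripes must be supplied together when restoring
RESTORE DATABASE [StackOverflow2018]
FROM DISK = N'F:\Backup\SO_1.bak',
     DISK = N'F:\Backup\SO_2.bak',
     DISK = N'F:\Backup\SO_3.bak',
     DISK = N'F:\Backup\SO_4.bak'
WITH REPLACE, STATS = 10;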

Is there any flexibility to specify the compression percentage while taking a backup in SQL Server?

krishnaki

Compressed vs Uncompressed becomes really important when you are shipping your backups to the cloud over a network. Backup and restore times I don’t care about 🤔.

It would also be interesting to add verification options to the comparison (both verification during the backup and RESTORE VERIFYONLY afterwards) 🤔🤔

MiningForPies
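
For the verify options mentioned above, a small sketch with placeholder names and paths: CHECKSUM makes the backup validate page checksums as it reads the data, and RESTORE VERIFYONLY then checks the backup file without actually restoring it:

-- Backup with page checksum verification (database name and UNC path are placeholders)
BACKUP DATABASE [YourDB]
TO DISK = N'\\backupserver\sqlbackups\YourDB.bak'
WITH COMPRESSION, CHECKSUM, INIT, STATS = 10;

-- Verify the backup file is readable and its checksums are valid, without restoring it
RESTORE VERIFYONLY
FROM DISK = N'\\backupserver\sqlbackups\YourDB.bak'
WITH CHECKSUM;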

Try comparing backups of big databases, e.g. 100 GB. My own experience is that on a big database the compressed backup is faster than the uncompressed one.

xren

I've been testing compressed backups on TDE databases (SQL Server 2016, with MAXTRANSFERSIZE > 64 KB), but I'm only seeing a ~20% decrease in file size. These databases are over 300 GB, some much larger. Is there a point where the original file size is just too much for backup compression?

rebekahw
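
For reference, a sketch of the kind of backup described above, with a placeholder database name and path; on SQL Server 2016 and later, raising MAXTRANSFERSIZE above 64 KB is what allows compression to be applied to a TDE-protected database, though how much the encrypted pages actually shrink still depends on the data itself:

-- Compressed backup of a TDE database; MAXTRANSFERSIZE > 64 KB enables compression here
BACKUP DATABASE [YourTdeDb]
TO DISK = N'E:\Backup\YourTdeDb.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, CHECKSUM, INIT, STATS = 10;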

I'm intrigued by this as I've always thought compressed was quicker.

I've just done some tests on AdventureWorks 2014:

to C:\ no compression: 390MB/s
to C:\ with compression: 214MB/s
to UNC path no compression: 2.4MB/s
to UNC path with compression: 9.5MB/s

I guess it depends where the particular bottleneck is (over the network, IO is the bottleneck, which means the IO saving makes compression faster in that scenario).

steveearle
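
One way to compare runs like these after the fact is to pull the backup history from msdb, which records the duration, sizes, and destination of each backup; the database name below is a placeholder:

-- Recent backups of one database: duration, raw vs. compressed size, and destination
SELECT TOP (10)
       bs.backup_start_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s,
       bs.backup_size / 1048576.0                                    AS size_mb,
       bs.compressed_backup_size / 1048576.0                         AS compressed_mb,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'AdventureWorks2014'
ORDER BY bs.backup_start_date DESC;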

Does compression significantly affect the CPU load? Or does the faster write to disk cause more CPU usage? I'm curious about it.

jabkowy

In my opinion, performance does matter with backups.

If we are working on a data warehouse system in the SIMPLE recovery model that runs large updates/inserts, and the backup takes longer, those operations can really increase the size of the log file.

steveearle
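
On that point, log truncation is deferred while a full backup is running, which is why a slow backup plus heavy DML can inflate the log even in SIMPLE recovery. A quick way to watch it, using a placeholder database name:

-- Shows ACTIVE_BACKUP_OR_RESTORE while a backup is blocking log truncation
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDW';

-- Current log size and percent used for the database you are connected to
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;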