Less endurance than cheap SSDs??? WD Red Pro 20TB HDD

We take a look at whether the WD Red Pro 20TB has lower endurance than cheap M.2 NVMe SSDs. We discuss WD's "Workload Rate" and see how the company keeps this rating very low across the WD Red Pro line. We also briefly cover the drives' low MTBF and UBER ratings.

Comments

Instead of being scared each time about where the "gotcha" is hidden in the WD spec sheet and when it will bite me, I just go Exos.

drchanas

This reminds me of the cheapification of the Black drives in the 2009-ish timeframe; WD decided to disable TLER on the Black drives to force RAID users onto the "Enterprise" models, which of course cost 2x-3x as much per TB. It was well known in those days that the only real difference between these product lines was tweaked firmware. I immediately took a disliking to WD and started using HGST and Samsung drives.

the_beefy

I have kind of lost trust in disks from all manufacturers, so now I just buy a drive with CMR and a long warranty and hope for the best (and use data integrity checks and backups). The last disk I bought was a Seagate Exos enterprise drive and it has been really good so far (and cheap). In practice my drives usually survive a long time.

sunxore

While I'm no fan of WD, and avoid their products after the SMR concealment, Black series downgrades, and other shady business decisions, I think an accelerated life-cycle test is in order. Set up a consumer SSD of your choice and one of these Red drives to move data back and forth at full speed, and record the time and data moved until they encounter an unrecoverable error. I suspect the SSD is actually going to die first, regardless of what the rating says.

RN

300TB/year is ~9.5MB/s. I wonder if they have been copy-pasting the stats in those brochures since ATA/66 days.

On a side note, I wish I could get a cheap "consumer grade" 20TB SATA SSD...
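The conversion above checks out; a quick sanity check in Python (assuming decimal terabytes and a 365-day year):

```python
# Convert a 300 TB/year workload rating into an average sustained rate.
SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000 s
workload_bytes = 300e12                 # 300 TB, decimal

avg_rate_mb_s = workload_bytes / SECONDS_PER_YEAR / 1e6
print(f"{avg_rate_mb_s:.1f} MB/s")      # ~9.5 MB/s
```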

MoraFermi

I am SO disappointed with WD. I bought a couple of 4TB IronWolf drives yesterday. The 6TB WD Red Plus was only $20 more, which adds up even when buying two, but I just don't trust them anymore. And I'm still running two 15+ year old Greens. They're going now, even though there are still NO errors on either one; it's going to be a mechanical failure any day at this point.

These things do go in cycles. Back when I bought those drives you couldn't trust Seagates; they were failing like crazy. We called it the death crunch. Everyone I knew was having babies, and I had panicked people at my door with hard drives in hand, begging me to try to get all their baby pictures off the drive. Tiger Direct was literally giving away Seagates with system-builder copies of Vista just to get rid of them. That was ironic.

kattz

Doing a monthly SMART extended scan and a data scrub every month is approximately 444TB of reads per year on a 20TB disk (18.18TB actual), more if it's a Synology using btrfs with checksums enabled on all shared folders, as it first scrubs the btrfs filesystem to verify and correct data if required; once that finishes, a RAID sync is run.
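The yearly read volume from a schedule like that can be estimated; a sketch, assuming one extended SMART scan plus one scrub means two full-surface reads per month, using the ~18.18TB usable figure quoted above:

```python
usable_tb = 18.18           # usable capacity of a 20TB drive as quoted
full_reads_per_month = 2    # 1 extended SMART scan + 1 scrub

yearly_reads_tb = usable_tb * full_reads_per_month * 12
print(f"~{yearly_reads_tb:.0f}TB of reads per year")   # ~436TB
```

That lands in the same ballpark as the ~444TB estimate, and either way it exceeds the 300TB/year rating from routine maintenance alone, before any user I/O.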

leexgx

300TB/year is roughly 820GB/day. The ONLY time I remotely approach that number is when I do my full system backup on the 1st of each month. I have a 10TB red drive exclusively for backups. I do incremental daily backups of my OS, software and data drives that rotate with a full backup of each every 14 days. Most days I barely have activity and the drive with the most stuff on it is under 2TB. Of course, compression plays a big part in backup software so the wear is minimized.

If you want write endurance, get a 1.5TB Optane drive: those things will do 30 drive writes per day (45TB every 24 hours), making them phenomenal for a scratch disk, although they command an eye-watering price. Alternatively, an add-in PCIe card with four M.2 sticks could also be doable. The largest U.2 drive, at 30TB capacity, will even give you a full drive write per day.

And if that's STILL not enough there's always the option to go RAMDISK. It's all about picking the right tools for the job.
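The per-day figures in this comment can be verified with a couple of lines (assuming decimal units and the drive sizes quoted above):

```python
# 300 TB/year workload rating spread over 365 days
daily_workload_gb = 300_000 / 365
print(f"~{daily_workload_gb:.0f} GB/day")   # ~822 GB/day

# Optane-class endurance: 30 drive writes per day on a 1.5TB drive
optane_tb_per_day = 1.5 * 30
print(f"{optane_tb_per_day:.0f} TB/day")    # 45 TB/day
```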

Luscious

I'm happy with my 1TB WD Black and my 500GB Seagate Barracuda, both have around 9 power-on years and the other SMART data still looks good :)
I use both in my Ryzen desktop with three 500GB partitions: a 2 x 500GB ZFS pool (RAID-0) and a 500GB ZFS pool at the end of the 1TB. The RAID-0 pool has one dataset with copies=2 (RAID-1-like redundancy). To be honest they are supported by L2ARC/ZIL caches (90GB/5GB and 30GB/3GB) on a 128GB SATA SSD, and my main stuff runs from a 512GB NVMe SSD. To compensate for the 9 years, I have two backups, one on my laptop with a new 2TB HDD.

My hobbies are VMs; I have all Windows releases from 1986 to 2021 and all Ubuntu LTS releases, the first 4.10, my first 5.04 and the last 22.04 Dev. Ed. The NVMe SSD contains the ~14 most-used VMs. The striped pool contains a dataset with the ~24 VMs still receiving updates, and a RAID-1-style dataset with my music, photos, family videos etc. The last pool contains ancient archives of e.g. 16-bit software and the frozen ~30 VMs :) Everything is lz4-compressed with a compression ratio of ~2.

Since June 2019 my second backup is on a 2003 Pentium 4 HT (3.0GHz) with 1.21TB of HDDs, two 3.5" IDE (250+320GB) and two 2.5" SATA-1 (320+320GB). They have 2 to 4 power-on years and are powered on for 1 to 2 hours/week, running 32-bit FreeBSD 13.0 on OpenZFS 2.0 :) :) The Ryzen 3 2200G desktop runs a minimal install of Ubuntu 21.10 on OpenZFS 2.0.

bertnijhof

Awesome channel, wish I'd found it earlier. Saw it in a Jeff Geerling video.

Autotrope

After getting stung by WD ("Whirling Death") with 800 faulty drives failing in a load of video servers over a 6-week period in 2005, I've avoided them ever since. Not a fun 6 weeks trying to recover from that.

nicktoale

I use Gold even for my home PC; it's not that much more expensive! I work in a datacenter, and the drives most often scrapped daily are IronWolfs, not WD...

HHX_H

Here's some speak like a lawyer is in the room: "rated for UP TO 300TB" also means that if the part fails within the first 5TB, it still met the rating; there is no guaranteed minimum. Now that said, I haven't heard any complaints about the warranty not being honored if the part fails early, but I also don't dig through HDD forums unless I'm researching what to buy for the NAS.

ilovefunnyamvnd

Thanks for the great info. That purple light on the side of your face was very distracting.

jamesdk

The WD Red didn't include that limit previously!
Just search "wd red spec sheet 2015".
Is there some kind of flash cache like on hybrid HDDs? That could be the #1 explanation for that limit...
Or maybe they are actually using SMR and hiding it like that!?

cmuller

If you pay for a higher endurance drive, do you get better technology or does the money go into a fund to pay out claims when there are failures inside the warranty period?

AndrewHelgeCox

Seems Seagate IronWolf Pro HDDs are rated for 300TB/year too, and they call that "built tough". So it's seemingly not a WD issue but an industry-wide one. You need to get a comment from the manufacturers, because there are some questions that need answering. Most users will never run into the limit, though, because they will keep using the drives without issue well beyond the warranty period anyway.

maxwellsmart

Several years ago Tech Report (I think it was them) did an endurance test where they hammered several SATA SSDs with continuous writes until they died. (Most were MLC drives but a couple were TLC; QLC didn't exist yet, I think.) Most of them went way past their published endurance spec, with the longest-lasting being a ~250GB Samsung 840 Pro that managed about 2.4PB, I think.

I'd like to see someone repeat the same test, this time with both premium MLC/TLC NVMe SSDs (including enterprise ones with tens of petabytes of rated endurance) and basic budget QLC SSDs (with only a few hundred TB of endurance on a >1TB drive), and this time add hard drives to the mix as well: everything from the top-end datacenter drives (from Seagate, Toshiba AND Western Digital) all the way down to the most basic consumer drives, like the WD Green/Blue or Seagate Barracuda lines. Run the hard drives until they truly die: not just until they get bad sectors, but until they no longer spin up, or develop the "click of death", or something like that.

I wonder which would have more endurance - both in terms of total writes relative to their capacity, and in terms of how long they can stay continuously writing until they die? :)

pianoplayerkey

WD is just way too shady of a company these days. They burned me with that whole "WD Red drives are CMR!" lie, and that was the last straw for me. I'll never use another WD drive again.

bluegizmo

Great video! Great information and excellent conclusions!

thedanyesful