NVMe SSD Hardware RAID is Back! Broadcom MegaRAID 9670W-16i RAID Card Review

When Broadcom asked us to review their latest SSD RAID card, Kevin and Brian looked at each other as if they were transported back in time. With GPUs gaining ground in hardware RAID and a bevy of SDS options that don't need hardware RAID, the thought of "legacy" cards being back for NVMe SSDs seemed bizarre.

The guys were nonetheless shocked and impressed by the performance of the 9670W with 16x Micron 7450 SSDs. The MegaRAID 9670W-16i takes reads right up to the x16 PCIe Gen4 limit of 28GB/s and delivers up to 13GB/s of RAID5 write bandwidth. On the throughput side, random 4K read performance topped out at 7M IOPS, with writes spanning from 1M to 2.1M IOPS between RAID5 and RAID10.
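For context on the "x16 PCIe Gen4 limit" figure, here is a rough back-of-the-envelope check in Python. The constants are public PCIe 4.0 link parameters, not numbers from the review itself, so treat this as a sketch:

# PCIe Gen4 x16 bandwidth ceiling, one direction
LANE_RATE_GT_S = 16          # Gen4 raw line rate per lane (GT/s)
ENCODING = 128 / 130         # 128b/130b line-encoding efficiency
LANES = 16                   # x16 slot

raw_gb_s = LANE_RATE_GT_S * ENCODING * LANES / 8
print(f"raw x16 Gen4 ceiling: {raw_gb_s:.1f} GB/s")             # ~31.5 GB/s

# After TLP/DLLP protocol overhead, roughly 85-90% of raw is achievable,
# which lands right around the 28GB/s read figure quoted above.
print(f"practical ceiling (~88%): {raw_gb_s * 0.88:.1f} GB/s")  # ~27.7 GB/s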

Full Review -

Discord Server -

@MicronTechnology @broadcom @SupermicroPage
Comments

Thank you for this amazing review. Those RAID 5 numbers look really good.

thatLion

Thank god. Nice to get NVMe RAID cards. You were going to have to pry my HP P822 from my cold dead hands before seeing these.

JordansTechJunk

I just came here after checking if I can use RAID for Proxmox, and figured out RAID can also be done at the hardware level.
Also, the first video that came up when I searched "hardware RAID" said something along the lines of "Why Hardware RAID is a bad idea in 2022".

I'm just looking for some redundancy for my low-end private NAS built from old parts, so this is probably absolute overkill for my use case, lol.

LinkEX

Nice, but a couple of issues with these tri-mode/NVMe RAID cards. 1) There are no SFF-8612 (x8) to SFF-8611 (x2 or x1) breakout cables that would let you gang more drives together (basically it's only x4, which really limits what you can do without dedicated PCIe backplanes, which are very custom). Generally with NVMe RAID I'm looking more at 4K random IOPS than streaming, since streaming very rarely shows up in block request datastreams.

Next, I'd like to see RAID6 performance numbers in addition to just RAID10/RAID5. The main point of RAID6 with NVMe is the same as with traditional storage: at the capacities NVMe drives have reached (>~4TB), you have exactly the same bit-flip concerns and need that extra protection. It's not as robust as ZFS, which checksums every record at read and write, but as you mentioned, these cards get deployed in business servers running OSes/apps that don't support ZFS for primary storage.

All in all, it's great that Broadcom is updating their RAID chipsets. There still needs to be some improvement in the parity calcs, or perhaps support for T10 DIF (T10 PI) for NVMe, which was included in the NVMe 1.2.1 spec but which I haven't seen anyone use or support.
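Since the comment mentions parity calcs, here is a toy Python sketch of the single-parity (RAID5) math a RAID engine runs per stripe; RAID6 adds a second, Galois-field "Q" parity on top of this. Purely illustrative, and it says nothing about how the 9670W actually implements it:

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A 4-drive RAID5 stripe: three data blocks plus one XOR parity block
data = [b"\x11" * 8, b"\x22" * 8, b"\x33" * 8]
parity = xor_blocks(data)

# Lose "drive 1" and rebuild its block from the survivors plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block matches:", rebuilt == data[1])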

stevekristoff

So it roughly halved the drive performance versus soft RAID? Were all 16 drives used in each array, or was this testing sets of 8? Were the JBOD numbers through the controller or through direct connect? Is this basically the Dell PERC 12?

johnwuethrich

Would be nice to know:
how many drives did you use for these?
are all the listed numbers through the controller?
The Micron 7450 7.6TB is rated at 6800R and 5600W for a single drive, which divides out to 4.12 drives' worth of read for RAID 10 and 5 and 1.81 drives' worth of write (see the arithmetic sketch after this comment).
1M 4K reads and 215K random 4K writes.

Guessing you have 8 drives here?

How does it perform with 4? Is 8 the max?

Is this similar to the PERC 12?
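A quick Python sketch of the per-drive division in the comment above, using the single-drive ratings the comment quotes and the aggregate figures from the description. Which aggregate numbers actually apply is an assumption on our part, not data pulled from the full review charts:

# Per-drive scaling: aggregate array bandwidth divided by one drive's rating
single_read, single_write = 6.8, 5.6   # GB/s, Micron 7450 7.6TB rating quoted above
array_read, array_write = 28.0, 13.0   # GB/s, headline read / RAID5 write figures

print(f"read  scaling: {array_read / single_read:.2f} drives' worth")    # ~4.12
print(f"write scaling: {array_write / single_write:.2f} drives' worth")  # ~2.32 with the
# 13GB/s figure; the comment's 1.81 implies it divided a different write number,
# presumably taken from the full review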

johnwuethrich

Hi, would you suggest this for a RAID5 (4x 3.84TB PM9A3 Samsung SSDs) or another model? @StorageReview

aakashrajwani

Just to clarify, was this a single 9670W-16i card? Could the card have been connected to the backplane of the Supermicro rather than the JBOFs?

devonnoonan

@StorageReview

Can the 9650 be connected to a 24x NVMe backplane, e.g. the Chenbro 238, or is NVMe passive for now, meaning that for 24 NVMe bays I need 2 RAID cards, a 16i and an 8i?

brainthesizeofplanet

Could you also test the Adaptec 3254 with 16 channels?

brainthesizeofplanet

What SM server model did you try this in? SM doesn't offer a RAID card for NVMe.

antiquebowieknifechannel

Can I ask what JBOF you are using? I got the card, but the performance of each NVMe drive is pathetic at 300MB/s using a direct-attach U.2 <-> SlimSAS cable.

CH-skho

Impressive results. While I disagree in general about hardware RAID, I still found the video informative. I'm also a little mad at Dell and PERC at the moment. I just had a client whose RAID 1 boot array was destroyed because the PERC chose the wrong SSD to mirror from and tried to overwrite the other SSD until it literally killed it. So maybe I'm biased.

michaelrichardson

Is the significant performance loss in RAID mode because of no TRIM/unmap?

For NVMe, I'd rather use this RAID card as a PCIe expander, set the drives to JBOD mode, and use software RAID so that TRIM can work.
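For anyone curious about that route, here is a minimal Python sketch of the JBOD-plus-software-RAID setup the comment describes, using Linux mdadm as one common option (the comment doesn't name a specific stack; device names, drive count, and mount point are placeholders):

import subprocess

# Drives exposed individually by the controller in JBOD/passthrough mode
drives = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

# Build a software RAID5 array across them with mdadm (run as root)
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=5",
     f"--raid-devices={len(drives)}", *drives],
    check=True,
)

# Once a filesystem on /dev/md0 is mounted, discard/TRIM still reaches the
# SSDs through the md layer, e.g. via a periodic fstrim of the mount point
subprocess.run(["fstrim", "-v", "/mnt/array"], check=True)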

mzamroni

I dare you to send me 2 of those 30TB SSDs 😤

Agent_Crimson

Very interesting video - thanks for explaining everything - so an old man like me can understand - LoL !!

creed