Tuesday Tech Tip - Sizing Large Ceph Clusters

Each Tuesday, we release a tech tip video giving users information on various topics relating to our Storinator storage servers.

This week, Brett talks about sizing large, multi-petabyte Ceph clusters. Specifically, he gives you tips on how to start storing all of this data so you get the best bang for your storage dollar over the long term.

Be sure to watch next Tuesday, when we give you another 45 Drives tech tip.
Comments

As a thought, you could still start with a single-node cluster using EC 8+2 and an OSD-level failure domain. At this scale, the redundancy level is similar to traditional SAN solutions, where you can't afford to lose a whole chassis. Unlike a traditional SAN, after a two-disk failure the data rebalances onto the remaining (smaller) set of disks rather than waiting for a spare to be inserted.
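A minimal sketch of that starting point, assuming hypothetical profile and pool names (ec82, ecpool) not taken from the video:

    # EC 8+2 profile that only requires distinct OSDs, not distinct hosts
    ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=osd
    ceph osd pool create ecpool 128 128 erasure ec82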
Once the cluster hits the 5-node mark, you can change the CRUSH rules to provide the equivalent of host-level redundancy with a rule along the lines of "choose 5 hosts, choose 2 HBAs from each, choose one OSD" to simulate a 4+1 scheme, at the cost of some performance overhead. At this point, you can survive a complete node failure without losing data.
Once you hit the 10-node mark, you can replace the "5 hosts, 2 OSDs" CRUSH rule with a simpler "choose 10 hosts" rule and stop worrying until you hit a bigger failure domain, such as rack-level or row-level.
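Roughly, those two rules might look like the following in a decompiled CRUSH map. Names and ids are illustrative, and this sketch simplifies the HBA step by choosing OSDs directly under each host, since a per-HBA choice would need a custom bucket type in the hierarchy:

    rule ec82_5hosts {
        id 1
        type erasure
        step take default
        # 5 distinct hosts, 2 OSDs under each = 10 shards for EC 8+2
        step choose indep 5 type host
        step chooseleaf indep 2 type osd
        step emit
    }

    rule ec82_10hosts {
        id 2
        type erasure
        step take default
        # indep 0 = "as many as the pool size", i.e. 10 hosts, 1 shard each
        step chooseleaf indep 0 type host
        step emit
    }

You would extract and decompile the map with ceph osd getcrushmap and crushtool -d, add the rule, recompile with crushtool -c, load it with ceph osd setcrushmap, then point the pool at the new rule with ceph osd pool set <pool> crush_rule <rule>.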

NTmatter

If you store multiple petabytes, I hope you also get a six-figure salary to go with it!

fbaillargeon

So my main question is: will 45drives sponsor LinusTechTips' new server, because his 1-petabyte server is already full?

joncepet