Tuesday Tech Tip - Tuning and Benchmarking for your Workload with Ceph

Each Tuesday, we will be releasing a tech tip video that will give users information on various topics relating to our Storinator storage servers.

Chapters:
0:00 Introduction
0:33 Introduction and Recap
2:10 What is included in the demonstration?
3:34 Demo agenda
7:34 Disk benchmarking
10:24 Network testing
16:05 System tuning
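
For readers who want to try the disk and network checks from those chapters on their own hardware, a minimal sketch follows. The exact commands used in the video are not listed in this description; the sketch assumes fio and iperf3 are installed, that /dev/sdX is a scratch disk that can be safely overwritten, and that 10.0.0.2 is an iperf3 server running on another cluster node.

    # baseline_checks.py -- hedged sketch of raw disk and network baselines before any Ceph-level test.
    # Assumptions (not from the video): fio and iperf3 are installed, /dev/sdX is a scratch disk
    # that is safe to overwrite, and 10.0.0.2 is an iperf3 server on another cluster node.
    import subprocess

    def disk_baseline(device: str) -> None:
        # Sequential 1 MiB direct-I/O writes for 60 s, roughly a per-disk sanity check.
        subprocess.run([
            "fio", "--name=seqwrite", f"--filename={device}",
            "--rw=write", "--bs=1M", "--iodepth=32", "--ioengine=libaio",
            "--direct=1", "--runtime=60", "--time_based",
        ], check=True)

    def network_baseline(server: str) -> None:
        # Four parallel TCP streams for 30 s against an iperf3 server on the cluster network.
        subprocess.run(["iperf3", "-c", server, "-P", "4", "-t", "30"], check=True)

    if __name__ == "__main__":
        disk_baseline("/dev/sdX")      # replace with a disk you can safely overwrite
        network_baseline("10.0.0.2")   # replace with your iperf3 server's address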

Well, Mitch's last video on demystifying benchmarking hit over 100 likes, so this week he keeps his promise and goes more in depth on this topic.

The previous video focused on performance characteristics for different workloads and architectural considerations for ZFS to improve your performance.

In this week's video, Mitch talks about tuning and benchmarking your storage workload with Ceph. This video is part 1 of a multi-part series centered around Ceph. Check it out as Mitch discusses how to tune your Ceph environment with specific tips so you get the best performance for your workload before running your benchmarks.
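
As a rough companion to that benchmarking discussion, here is a minimal sketch of a Ceph-level baseline using rados bench. The pool name "bench" and the timings are illustrative assumptions, not commands taken from the video, and this should never be run against a production pool.

    # rados_bench_sketch.py -- hedged sketch of a Ceph-level write/read baseline.
    # The pool name "bench" is an assumption; never benchmark against a production pool.
    import subprocess

    POOL = "bench"

    # 60 s of object writes, keeping the objects so a read pass can follow.
    subprocess.run(["rados", "bench", "-p", POOL, "60", "write", "--no-cleanup"], check=True)

    # Sequential read pass over the objects written above.
    subprocess.run(["rados", "bench", "-p", POOL, "60", "seq"], check=True)

    # Remove the leftover benchmark objects.
    subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)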

Be sure to watch next Tuesday, when we give you another 45 Drives tech tip.
Comments

Hey Mitch, thanks for getting started on this video series so fast. I would like to see tuning for VM workloads first. So RBD. Thanks! Keep up the great work!

BillHughesWeehooey

Great start, looking forward to the rest of this series. For me, RBD performance is most important.

ThorbjoernWeidemann

First, congratulations and thank you for getting this Ceph Cluster benchmark series kicked off.

I really like the way you started this: since Ceph is a networked filesystem, we really need to make sure the foundation of the cluster - the CPU, the network and the disks - meets our requirements. The insight about disabling the higher C-states used for CPU power saving was very helpful.

I can see that a lot of us here are using Ceph to back VM workloads and yes, iSCSI setup, tuning and benchmarks would really be appreciated!

I might have missed a detail - to get those nice 10 Gbps numbers, was any custom network tuning done, e.g. jumbo frames (and maybe a discussion of the pros and cons)?

subhobroto
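
The C-state and jumbo-frame points raised in the comment above are easy to check on a node before tuning anything. Below is a minimal sketch that reads the Linux sysfs entries for cpu0's idle states and one NIC's MTU; the interface name eth0 is an assumption, and actually limiting deep C-states is normally done through kernel boot parameters or a tuned profile rather than a script like this.

    # foundation_check.py -- hedged sketch: report cpu0's available C-states and one NIC's MTU.
    # The interface name "eth0" is an assumption; adjust it to your cluster-network interface.
    from pathlib import Path

    def report_cstates() -> None:
        # Each cpuidle state directory exposes a human-readable name and a "disable" flag.
        for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
            name = (state / "name").read_text().strip()
            disabled = (state / "disable").read_text().strip() == "1"
            print(f"{state.name}: {name} ({'disabled' if disabled else 'enabled'})")

    def report_mtu(interface: str) -> None:
        mtu = Path(f"/sys/class/net/{interface}/mtu").read_text().strip()
        print(f"{interface} MTU: {mtu} (jumbo frames are typically MTU 9000)")

    if __name__ == "__main__":
        report_cstates()
        report_mtu("eth0")  # hypothetical interface name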

This is great. I'm going to take a look at tuning the Ceph cluster we built at work. Would love to see a top-end version of this with an all-NVMe setup.

zacks

VM workloads and iSCSI, hopefully with VMware as well; NFS (not VMs), pretty please.

pewpewpew

I am debating running Ceph again on our 7-node Proxmox cluster. Right now it's running ZFS with replication to keep things simple. I'm mostly interested in HA, keeping all the VMs running when a node fails. I ran Ceph before on an older version of Proxmox three years ago and had serious performance issues when it rebalanced. I am thinking of having 4 compute nodes and 3 Ceph nodes, on a dedicated 10 Gb network, just to separate things out instead of all 7 nodes running the VMs and Ceph at the same time. I won't be using iSCSI, so it'll all be through RBD. Any thoughts?

Darkk
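
One knob that often helps with the rebalance pain described in the comment above is throttling backfill and recovery. The sketch below uses ceph config set with illustrative values; they are not a recommendation from the video, and recent releases that use the mClock scheduler manage these limits differently, so check the documentation for your version first.

    # throttle_recovery.py -- hedged sketch: soften rebalance impact by throttling backfill/recovery.
    # Values are illustrative, not a recommendation; releases using the mClock scheduler handle
    # these limits differently, so check your version's documentation first.
    import subprocess

    def ceph_config_set(who: str, option: str, value: str) -> None:
        subprocess.run(["ceph", "config", "set", who, option, value], check=True)

    # Fewer concurrent backfill operations per OSD.
    ceph_config_set("osd", "osd_max_backfills", "1")
    # Fewer concurrent recovery operations per OSD.
    ceph_config_set("osd", "osd_recovery_max_active", "1")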

I tried to tune our cluster with the tuned network-latency profile on Dell PowerEdge R730xd servers running Ubuntu 20.04.4 LTS, and our CPUs started to heat up from 50 °C to 65 °C and kept climbing, so I decided to switch back to the default balanced profile. C-states were empty and Busy% was 100% on every single core.

jozefrebjak
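
A quick way to watch for the kind of thermal regression described in the comment above is to check the active tuned profile and CPU temperatures before and after switching profiles. The sketch below is a hedged example that assumes tuned-adm is installed and that the coretemp sensors are exposed through hwmon; reverting is just tuned-adm profile balanced, as the commenter did.

    # profile_temps.py -- hedged sketch: print the active tuned profile and hwmon CPU temperatures,
    # so a heat increase like the one described above is visible before a profile change sticks.
    import subprocess
    from pathlib import Path

    # "tuned-adm active" prints the currently applied profile.
    subprocess.run(["tuned-adm", "active"], check=True)

    # hwmon exposes temperatures in millidegrees Celsius.
    for temp_file in sorted(Path("/sys/class/hwmon").glob("hwmon*/temp*_input")):
        label_file = temp_file.with_name(temp_file.name.replace("_input", "_label"))
        label = label_file.read_text().strip() if label_file.exists() else str(temp_file)
        print(f"{label}: {int(temp_file.read_text()) / 1000:.1f} °C")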