Setting up a new Ceph cache pool for better performance

This video discusses how to set up a Ceph cache pool and cache tier to improve read and write performance. While there are many cache settings we could cover, I will focus on the most important values you can set on a pool to cache your data effectively.
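For readers following along, a minimal sketch of the tier setup described above, using the standard Ceph CLI (the pool names hot_storage and cold_storage, and all numeric values, are illustrative assumptions, not the exact commands from the video):

```shell
# Attach hot_storage as a cache tier in front of cold_storage
# (pool names are assumptions for illustration)
ceph osd tier add cold_storage hot_storage

# Writeback mode: writes land on the cache pool and are flushed later
ceph osd tier cache-mode hot_storage writeback

# Route client traffic for cold_storage through the cache tier
ceph osd tier set-overlay cold_storage hot_storage

# A hit set is required so Ceph can track which objects are hot
ceph osd pool set hot_storage hit_set_type bloom
ceph osd pool set hot_storage hit_set_count 12
ceph osd pool set hot_storage hit_set_period 14400

# Example sizing and flush thresholds -- tune these for your cluster
ceph osd pool set hot_storage target_max_bytes 1099511627776  # 1 TiB
ceph osd pool set hot_storage cache_target_dirty_ratio 0.4
ceph osd pool set hot_storage cache_target_full_ratio 0.8
```

These commands only take effect on a running cluster; treat the thresholds as starting points to tune, not recommendations.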

Write up:


Join the channel to get access to more perks:

Or visit my blog at:

Outro music: Sanaas Scylla

#ceph #cache #pool
Comments

I really wish that ceph developers didn't seem hell-bent on removing/deprecating/telling everyone NOT to use this feature. After using several enterprise commercial SANs in my career that use both spinning rust and SSDs - because we're not billionaires and no, we can't afford all-flash, Pure! Stop sending me marketing crap for all-flash SANs! - anyway, it just seems crazy to me that ceph doesn't want to put more development hours into making cache tiering BETTER rather than warning everyone away from it. Why wouldn't I want a pool of SSDs capturing incoming writes at high speed, then demoting it down to the HDDs in the background? This functionality is built into every single SAN of the past 15 years that has multiple speeds of drive in them. Ceph is so good and so functional, and they have a tiering feature... yet they want people to stop using it, rather than improving it? I just don't get it.

heiseheise

I think you can separately test the plain HDD setup and the SSD cache tier + HDD setup.

Exalted

A Red Hat certified Enterprise course didn't even cover this, even when directly asked. Excellent video.

cmarotta

It was a nice video, but it would have been nice to see statistics about the change in performance.

andbuitra

Nice video Daniel! Is caching preferred over journal disks or am I mixing apples and oranges?

galle

Will Ceph automatically use SSD drives/OSDs for the hot_storage pool you have created, or do we have to associate SSDs with that particular pool?

deba

Thank you for the guide. I am a Ceph beginner.

At 3:48, how does Ceph know to put the hot_storage pool on the SSD OSDs (instead of the HDD OSDs)? Is this automatic? I already have 2 SSDs and 2 HDDs set up as OSDs on the host.
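On the placement question above: Ceph does not route a pool to SSDs automatically; the pool needs a CRUSH rule restricted to the right device class. A minimal sketch (the rule name ssd_only is an assumption; hot_storage matches the pool name mentioned in the comments):

```shell
# Check the device class Ceph auto-detected for each OSD (hdd/ssd/nvme)
ceph osd tree

# Create a replicated CRUSH rule restricted to the "ssd" device class.
# "default" is the CRUSH root, "host" the failure domain,
# and "ssd_only" is an assumed rule name.
ceph osd crush rule create-replicated ssd_only default host ssd

# Point the cache pool at that rule so it only uses SSD-backed OSDs
ceph osd pool set hot_storage crush_rule ssd_only
```

The same pattern works for NVMe by substituting the nvme device class in the rule.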

ap

Hi again. Do you know what the default time is for cache_min_flush_age if you don't manually set it? You set it to 10 mins. How can I reset it to the default, or to none? Thanks
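For what it's worth, the documented default for cache_min_flush_age is 0 (no minimum age before an object may be flushed), so setting it back to 0 restores the default behavior:

```shell
# Inspect the current value on the cache pool
# (pool name hot_storage assumed from the video)
ceph osd pool get hot_storage cache_min_flush_age

# 0 is the documented default: no minimum age before flushing
ceph osd pool set hot_storage cache_min_flush_age 0
```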

CJ-vgtg

What do you think would happen in a read-intensive pool if the criteria you set for data to be promoted to the cache far exceed the available cache capacity? Do these caching settings care whether the pools are replicated or erasure-coded?

cmarotta

Hi Daniel, in writeback mode, do you write data directly to the cache pool, or write to the base pool first and then call promote_object() to promote the object to the cache pool?

abonu

Is it possible to use ONE cache pool for multiple pools with data?

piotrgocza

Hi! Can you tell me, please: if the storage pool contains RBD images, will all the data inside the RBD images be moved to the cache tier? Thanks

studentofmgy

Hiya. Thanks for the great videos. I do have a doubt about cache tiers. If I have 3 HDDs and 3 SSDs, would it be best to use 1 SSD for the cache tier pool and 3 HDDs + 2 SSDs for the data pool (set as cold HDD storage)? Thanks

CJ-vgtg

How do you force hot_storage to only use the NVMe disks?

damiendye

In Proxmox, following your guide, I have created SSD caching for my HDD pool.

Which pool do I add as the storage pool for VMs, the HDD pool?

ap

Hi, can you help me fix some problems with Ceph on Proxmox? How can I contact you?

mazter