Tuesday Tech Tip - Highly Available NFS

Each Tuesday, we release a tech tip video covering a range of topics related to our Storinator storage servers.

This week, Brett talks about why you need highly available NFS and walks you through configuring a highly available NFS service in front of your CephFS cluster.
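The usual approach (and the one the Corosync/Pacemaker comments below refer to) is a Pacemaker cluster that floats a virtual IP between the Ganesha gateway nodes, so clients keep mounting the same address after a node fails. As a rough sketch only, using the pcs shell (the IP address, netmask and resource names are placeholders, not the values used in the video):

# Floating virtual IP that NFS clients mount against (placeholder address)
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s

# NFS-Ganesha itself, managed as a systemd resource
pcs resource create nfs_ganesha systemd:nfs-ganesha op monitor interval=10s

# Keep Ganesha on the same node as the VIP, and bring the VIP up first
pcs constraint colocation add nfs_ganesha with nfs_vip INFINITY
pcs constraint order nfs_vip then nfs_ganesha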

Be sure to watch next Tuesday, when we give you another 45 Drives tech tip.
Comments

Is this video for professionals who already know the material, or for tech enthusiasts who don't understand the technology?

BrianThomas

That was a really nice tutorial! Is Mario Kart a discount coupon to buy hardware (2 for 1)?

NFS HA on VMware = totally a must

So the minimum hardware would be 2 x AV15 plus an external Ceph server? Is there a blog post about this?

CapitaineWEB-FR

FAILOVER TIME IS 5 MIN!! Kindly advise.

Hello team, we are trying the same setup, using the ceph-nfs code from GitHub, but we see that it takes around 5 minutes to switch over from the active node to the standby during a failover. In our environment we are running a Ceph cluster (version 15.2.7) and trying to use NFS in HA mode.
Mode: "Active/Passive HA NFS Cluster"

When using an Active/Passive HA configuration for the NFS server with Corosync/Pacemaker:
1. The configuration is in place and we are able to perform a failover, but when the active
node is tested with a power-off/service-stop, we observe:
1.1: I/O operations get stuck for around 5 minutes and then resume, even though the
handover from the active node to the standby node happens immediately once the node is
powered off or the service is stopped.

Ceph version: 15.2.7
NFS-Ganesha version: 3.3

ganesha.conf:

# Please do not change this file directly since it is managed by Ansible and will be overwritten


# Core protocol settings: NFSv3 lock manager and rquota disabled
NFS_Core_Param
{
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 3, 4;
}

# Disable attribute caching on all exports
EXPORT_DEFAULTS {
    Attr_Expiration_Time = 0;
}

# Keep Ganesha's own metadata cache minimal
CACHEINODE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
}

# Ganesha watches this RADOS object for configuration updates
RADOS_URLS {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    watch_url =
}

# NFSv4 client recovery state is stored in RADOS
NFSv4 {
    RecoveryBackend = 'rados_ng';
}

# Pool/namespace holding the grace and recovery data for this node
RADOS_KV {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    pool = "nfs_ganesha";
    namespace = "ganesha-grace";
    nodeid = "cephnode2";
}

%url

LOG {
    Facility {
        name = FILE;
        destination =
        enable = active;
    }
}

# CephFS exports served through the CEPH FSAL
EXPORT
{
    Export_id = 20235;
    Path =
    Pseudo = /conf;
    Access_Type = RW;
    Protocols = 3, 4;
    Transports = TCP;
    SecType = sys, krb5, krb5i, krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}

EXPORT
{
    Export_id = 20236;
    Path =
    Pseudo = /opr;
    Access_Type = RW;
    Protocols = 3, 4;
    Transports = TCP;
    SecType = sys, krb5, krb5i, krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
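
One thing worth ruling out when an otherwise instant Pacemaker failover still stalls clients for minutes is the NFSv4 grace/lease window that Ganesha enforces while clients reclaim state, plus the clients' own retransmit back-off. Both timers can be set in the NFSv4 block; the shortened values below are purely illustrative (the defaults are Lease_Lifetime = 60 and Grace_Period = 90), and shortening them trades faster resumption for less time to reclaim locks:

NFSv4 {
    RecoveryBackend = 'rados_ng';
    # Illustrative, shortened timers; defaults are 60 and 90 seconds
    Lease_Lifetime = 20;
    Grace_Period = 30;
}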

learn_by_example

I was wondering what your nfs.yml Ansible playbook file looks like.

OneSmokinJoe

Thanks, what's this web interface for Ceph that you are using? Does Ceph come with its own interface?

mateuszbieniek

15 seconds downtime is way too slow for me :<

GrandmasterPoi

I have Ceph Nautilus and CephFS configured, and I want NFS on top of CephFS. How do I modify the nfs playbook, given that CephFS is mounted on /mnt/mycephfs?
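
(For reference, Ganesha's CEPH FSAL talks to the cluster directly through libcephfs rather than through the /mnt/mycephfs kernel mount, so Path in the export refers to a path inside CephFS. A minimal, hypothetical export block, not taken from the 45 Drives playbook:)

EXPORT {
    Export_Id = 100;            # any unused export id
    Path = /;                   # path inside CephFS, not the local kernel mount
    Pseudo = /cephfs;           # where NFS clients see the export
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "admin";      # cephx user with access to the filesystem
    }
}

Clients would then mount it as nfs-server:/cephfs.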

MustafizurRahman-pyvu