From: Marc G. <gr...@at...> - 2009-06-08 09:19:31
Hi Klaus,

On Thursday 28 May 2009 09:17:04 Klaus Steinberger wrote:
> Hi,
>
> because of some testing of a samba/ctdb cluster on top of OSR I tweaked
> around with the POSIX locking rate limit of gfs_controld. The default
> for plock_rate_limit is "100", which is quite low.
>
> I raised the limit on one of my SL 5.3/GFS OSR clusters (a virtual one)
> to 10000, which seems to be higher than the real rate (I measured up to
> 2200/sec using ping_pong).
>
> I see a very interesting side effect:
>
> The startup of nodes seems to be quite a bit faster now. Especially the
> usually slow udev startup after changing root now runs like a charm.
>
> So maybe somebody could try to confirm that?
>
> To raise plock_rate_limit, put the following line into
> /etc/cluster/cluster.conf:
>
> <gfs_controld plock_rate_limit="10000"/>
>
> Please be aware that gfs_controld cannot be restarted or reconfigured on
> a running node! A node has to be rebooted to change plock_rate_limit.
>
> Sincerely,
> Klaus

I've looked at your idea and could confirm it as well. Although I couldn't
get udev to take very long, I turned to a test program we have written for
such purposes and got _VERY_ impressive results.

The program just creates files that are flocked like this:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/file.h>

int flock_files(const char *dir, int pnumber, int count) {
    char *filename;
    int fd;
    int i;

    for (i = 0; i < count; i++) {
        filename = malloc(50 * sizeof(char));
        sprintf(filename, "%s/test-%i-%i", dir, pnumber, i);
        //printf("filename: %s/test-%i-%i\n", dir, pnumber, i);
        fd = open(filename, O_SYNC | O_RDWR | O_CREAT, 0644);
        flock(fd, LOCK_EX);
        //close(fd);   /* kept open so the lock stays held */
        free(filename);
    }
    return 0;
}

These are the results on a two-node GFS cluster virtualized on Xen.

Short summary: creating 10000 fcntl locks with 10 processes takes
~20 seconds with the default settings and ~2 seconds with
plock_rate_limit=10000.

Description:

1.1 Default plock_rate_limit:

<?xml version='1.0' encoding='UTF-8'?>
<cluster config_version='3' name='axqad106'>
  <!-- <gfs_controld plock_rate_limit="10000"/> -->
  <clusternodes>
  ...
</cluster>

1.2.1 Node1:

time /atix/projects/com.oonics/nashead2004/management/comoonics-benchmarks/write_files /tmp/test 100 1000 10 5
Process 10 fcntl-locking 1000 files

real    0m19.497s
user    0m0.008s
sys     0m0.064s

1.2.2 Node2:

time /atix/projects/com.oonics/nashead2004/management/comoonics-benchmarks/write_files /tmp/test 100 1000 10 5
Process 10 fcntl-locking 1000 files

real    0m19.974s
user    0m0.000s
sys     0m0.044s

2.1 plock_rate_limit=10000:

<?xml version='1.0' encoding='UTF-8'?>
<cluster config_version='2' name='axqad106'>
  <gfs_controld plock_rate_limit="10000"/>
  <clusternodes>
  ...
</cluster>

2.2.1 Node1:

time /atix/projects/com.oonics/nashead2004/management/comoonics-benchmarks/write_files /tmp/test 100 1000 10 5
Process 10 fcntl-locking 1000 files

real    0m2.205s
user    0m0.000s
sys     0m0.080s

2.2.2 Node2:

time /atix/projects/com.oonics/nashead2004/management/comoonics-benchmarks/write_files /tmp/test 100 1000 10 5
Process 10 fcntl-locking 1000 files

real    0m2.217s
user    0m0.000s
sys     0m0.068s

That is quite something (roughly a factor of 10), I think. The consequence
would be a recommendation for applications that use a huge number of
POSIX/fcntl locks (e.g. samba with its tdb files) to experiment with this
setting.

Again, thanks very much for pointing this out.

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
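
A note on the code above: gfs_controld's plock_rate_limit throttles POSIX
(fcntl) locks, which is also what the "fcntl-locking" lines in the benchmark
output refer to, while the flock_files() snippet takes flock() locks. A
minimal fcntl-based variant of the same loop might look like the sketch
below; the function name fcntl_lock_files and the dir/pnumber/count
parameters are illustrative assumptions, not part of the actual write_files
program.

/*
 * Sketch only: an fcntl()-based variant of the flock_files() loop,
 * taking exclusive POSIX locks instead of flock() locks.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int fcntl_lock_files(const char *dir, int pnumber, int count) {
    char filename[256];
    struct flock fl;
    int fd;
    int i;

    for (i = 0; i < count; i++) {
        snprintf(filename, sizeof(filename), "%s/test-%i-%i",
                 dir, pnumber, i);
        fd = open(filename, O_SYNC | O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return -1;
        }
        /* exclusive POSIX (fcntl) write lock on the whole file */
        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;   /* 0 = lock until end of file */
        if (fcntl(fd, F_SETLKW, &fl) < 0) {
            perror("fcntl");
            return -1;
        }
        /* fd stays open so the lock is held, as in the original loop */
    }
    return 0;
}

As in flock_files(), the file descriptors are deliberately left open so the
locks remain held until the process exits.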