From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bob Peterson
Date: Fri, 28 Sep 2018 09:59:48 -0400 (EDT)
Subject: [Cluster-devel] [PATCH 0/2] GFS2: inplace_reserve performance improvements
In-Reply-To: <2750068.OGLyu1R2mJ@dhcp-3-135.uk.xensource.com>
References: <1537455133-48589-1-git-send-email-mark.syms@citrix.com>
 <48f4bf35-814a-0efd-2a68-efe705acf923@redhat.com>
 <2750068.OGLyu1R2mJ@dhcp-3-135.uk.xensource.com>
Message-ID: <1019518653.16911354.1538143188726.JavaMail.zimbra@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

----- Original Message -----
> I think what's happening for us is that the work that needs to be done to
> release an rgrp lock is happening pretty fast and is about the same in all
> cases, so the stats are not providing a meaningful distinction. We see the
> same lock (or small number of locks) bouncing back and forth between nodes
> with neither node seeming to consider them congested enough to avoid, even
> though the FS is <50% full and there must be plenty of other non-full rgrps.
>
> --
> Tim Smith

Hi Tim,

Interesting. I've done experiments in the past where I allowed resource
group glocks to take advantage of the "minimum hold time", which today is
used only for inode glocks. In my experiments it made no appreciable
difference that I can recall, but it might be an interesting experiment
for you to try.

Steve's right that we need to be careful not to improve one aspect of
performance while hurting another, such as easing intra-node congestion
at the expense of inter-node congestion.

Regards,

Bob Peterson