From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bob Peterson
Date: Fri, 16 Dec 2016 11:10:22 -0500 (EST)
Subject: [Cluster-devel] [GFS2 PATCH] GFS2: Limit number of transaction blocks requested for truncates
In-Reply-To: <5853FCCA.8000809@redhat.com>
References: <1207853613.13078711.1481897777538.JavaMail.zimbra@redhat.com> <5853FCCA.8000809@redhat.com>
Message-ID: <345386777.13237851.1481904622112.JavaMail.zimbra@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

----- Original Message -----
| Hi,
|
| If it can't exceed 8192, then why is it only 256? Could there not be a
| larger number? It should probably scale with journal size to avoid
| causing issues for those with larger journals. The approach looks good,
| though, for a temporary fix.
|
| Steve.

Hi,

Yeah, I toyed with the idea of making it scale with the journal, but
anything I chose just seemed so... arbitrary. I originally coded it as
the number of blocks in the journal minus some slop for log headers and
such:

+	if (jblocks_needed + RES_SLOP > sdp->sd_jdesc->jd_blocks) {
+		jblocks_needed = sdp->sd_jdesc->jd_blocks - RES_SLOP;
+	}

At that time I had defined the slop as:

+#define RES_SLOP 16

I also toyed with the idea of using half the journal blocks:

	jblocks_needed = sdp->sd_jdesc->jd_blocks >> 1;

Or maybe 3/4 of the journal:

	jblocks_needed = (sdp->sd_jdesc->jd_blocks >> 1) +
			 (sdp->sd_jdesc->jd_blocks >> 2);

...but I didn't want to waste a bunch of CPU time doing calculations
either: deletes are already slow. Not that that would slow us down much.

In the end, I decided an arbitrary 256 was simple and enough for the
vast majority of deletes. I'm perfectly happy to use whatever value
makes the most sense. Suggestions?

Regards,

Bob Peterson
Red Hat File Systems
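
[Editor's note: for reference, below is a small standalone sketch of the
journal-scaled capping options discussed in the mail. It is illustrative
only, not code from the patch; cap_jblocks and the sample values are
hypothetical, while jblocks_needed, jd_blocks and RES_SLOP follow the
names used in the snippets above.]

	/*
	 * Illustrative sketch only: cap a transaction reservation at a
	 * fraction of the journal size, minus slop for log headers.
	 * The function and values are hypothetical, not from the patch.
	 */
	#include <stdio.h>

	#define RES_SLOP 16	/* slop for log headers and such */

	static unsigned int cap_jblocks(unsigned int jblocks_needed,
					unsigned int jd_blocks)
	{
		/* half the journal; three quarters would be
		 * (jd_blocks >> 1) + (jd_blocks >> 2) */
		unsigned int limit = jd_blocks >> 1;

		if (limit > RES_SLOP)
			limit -= RES_SLOP;

		return jblocks_needed > limit ? limit : jblocks_needed;
	}

	int main(void)
	{
		/* 8192-block journal, truncate asking for 10000 blocks:
		 * the cap works out to 8192/2 - 16 = 4080 */
		printf("%u\n", cap_jblocks(10000, 8192));
		return 0;
	}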