From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wendy Cheng
Date: Thu, 09 Aug 2007 09:46:44 -0400
Subject: [Cluster-devel] [PATCH 4 of 5] Bz #248176: GFS2: invalid metadata block - REVISED
In-Reply-To: <1186609929.25269.46.camel@technetium.msp.redhat.com>
References: <1186609929.25269.46.camel@technetium.msp.redhat.com>
Message-ID: <46BB1AC4.9030509@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Bob Peterson wrote:
> Part of the problem was that inodes were being recycled
> before their buffers were flushed to the journal logs.
>

Setting aside the "after this patch, the problem goes away" result for a
moment ... I haven't reviewed the previous three patches yet, so I may not
have the overall picture, but why does taking the journal flush lock here
prevent a new inode from being reused before its associated buffers are
flushed to the logs? Could you elaborate?

> diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
> index b93ac45..2d7f7ea 100644
> --- a/fs/gfs2/rgrp.c
> +++ b/fs/gfs2/rgrp.c
> @@ -865,12 +865,15 @@ static struct inode *try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked)
>  	struct inode *inode;
>  	u32 goal = 0, block;
>  	u64 no_addr;
> +	struct gfs2_sbd *sdp = rgd->rd_sbd;
>  
>  	for(;;) {
>  		if (goal >= rgd->rd_data)
>  			break;
> +		down_write(&sdp->sd_log_flush_lock);
>  		block = rgblk_search(rgd, goal, GFS2_BLKST_UNLINKED,
>  				     GFS2_BLKST_UNLINKED);
> +		up_write(&sdp->sd_log_flush_lock);
>  		if (block == BFITNOENT)
>  			break;
>

My concern is that the usage of sd_log_flush_lock in GFS2 has become heavily
overloaded lately. The journal logic is gradually becoming difficult to
understand and maintain. With this change we move a lock that belongs to
log.c into another sub-component (rgrp.c). Intuitively, this is not right.

--
Wendy
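
As a rough illustration of the encapsulation point above, here is a minimal
sketch (not part of the posted patch, and the wrapper names are invented for
illustration) of one way sd_log_flush_lock could stay private to log.c, with
rgrp.c calling small exported helpers instead of taking the rw semaphore
directly:

/*
 * Hypothetical sketch only, not from the patch: keep sd_log_flush_lock
 * behind log.c's interface by exporting tiny wrappers.  Helper names
 * (gfs2_log_flush_block/unblock) are made up here.
 */

/* fs/gfs2/log.c */
void gfs2_log_flush_block(struct gfs2_sbd *sdp)
{
	down_write(&sdp->sd_log_flush_lock);	/* hold off journal flushes */
}

void gfs2_log_flush_unblock(struct gfs2_sbd *sdp)
{
	up_write(&sdp->sd_log_flush_lock);	/* allow journal flushes again */
}

/* fs/gfs2/rgrp.c, inside try_rgrp_unlink(), would then read: */
	gfs2_log_flush_block(sdp);
	block = rgblk_search(rgd, goal, GFS2_BLKST_UNLINKED,
			     GFS2_BLKST_UNLINKED);
	gfs2_log_flush_unblock(sdp);

Something along those lines would give rgrp.c the same ordering guarantee the
patch is after while keeping the journaling lock an internal detail of log.c.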