From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig
Subject: invalidate_inode_buffers call in invalidate_list?
Date: Sun, 17 Oct 2010 19:39:00 -0400
Message-ID: <20101017233900.GA1525@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linux-fsdevel@vger.kernel.org
To: viro@zeniv.linux.org.uk
Return-path:
Received: from bombadil.infradead.org ([18.85.46.34]:56985 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751111Ab0JQXjB (ORCPT );
	Sun, 17 Oct 2010 19:39:01 -0400
Content-Disposition: inline
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

Does anyone remember why we call invalidate_inode_buffers in
invalidate_list?  These days we basically call invalidate_inode_buffers
only in ->evict_inode, which we will call from invalidate_list in case
of a zero refcount.  For inodes with a zero refcount it's harmless, but
for busy inodes it means we lose track of which buffers were dirtied
for an inode, and thus can't write them out during fsync.

Something like the untested patch below should fix it.  While we're at
it - any reason not to consider I_NEW inodes as busy in
invalidate_list?

Index: linux-2.6/fs/inode.c
===================================================================
--- linux-2.6.orig/fs/inode.c	2010-10-17 16:33:34.826254213 -0700
+++ linux-2.6/fs/inode.c	2010-10-17 16:34:40.407003969 -0700
@@ -570,18 +570,16 @@ static int invalidate_list(struct super_
 			spin_unlock(&inode->i_lock);
 			continue;
 		}
-		invalidate_inode_buffers(inode);
-		if (!inode->i_ref) {
-			WARN_ON(inode->i_state & I_NEW);
-			inode->i_state |= I_FREEING;
+		if (inode->i_ref) {
 			spin_unlock(&inode->i_lock);
-
-			/* save a lock round trip by removing the inode here. */
-			list_move(&inode->i_sb_list, dispose);
+			busy = 1;
 			continue;
 		}
+		inode->i_state |= I_FREEING;
 		spin_unlock(&inode->i_lock);
-		busy = 1;
+
+		/* save a lock round trip by removing the inode here. */
+		list_move(&inode->i_sb_list, dispose);
 	}
 	return busy;
 }