From mboxrd@z Thu Jan 1 00:00:00 1970
From: Al Viro
Subject: [PATCH] fix ufs write vs. readpage race when writing into a hole
Date: Wed, 9 Sep 2015 10:16:39 +0100
Message-ID: <20150909091639.GM22011@ZenIV.linux.org.uk>
References: <20150906025504.GH22011@ZenIV.linux.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
To: Linus Torvalds
Return-path:
Received: from zeniv.linux.org.uk ([195.92.253.2]:55371 "EHLO ZenIV.linux.org.uk"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751443AbbIIJQl
	(ORCPT ); Wed, 9 Sep 2015 05:16:41 -0400
Content-Disposition: inline
In-Reply-To: <20150906025504.GH22011@ZenIV.linux.org.uk>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

Follow-up to the UFS series - with the way we clear new blocks (via the
buffer cache, possibly over more than a page worth of file), we really
should not insert a reference to the new block into the inode block tree
until after we have cleared it.

Signed-off-by: Al Viro
---
diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
index fb8b54e..dc5fae6 100644
--- a/fs/ufs/balloc.c
+++ b/fs/ufs/balloc.c
@@ -417,14 +417,14 @@ u64 ufs_new_fragments(struct inode *inode, void *p, u64 fragment,
 	if (oldcount == 0) {
 		result = ufs_alloc_fragments (inode, cgno, goal, count, err);
 		if (result) {
+			ufs_clear_frags(inode, result + oldcount,
+					newcount - oldcount, locked_page != NULL);
 			write_seqlock(&UFS_I(inode)->meta_lock);
 			ufs_cpu_to_data_ptr(sb, p, result);
 			write_sequnlock(&UFS_I(inode)->meta_lock);
 			*err = 0;
 			UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag,
						fragment + count);
-			ufs_clear_frags(inode, result + oldcount,
-					newcount - oldcount, locked_page != NULL);
 		}
 		mutex_unlock(&UFS_SB(sb)->s_lock);
 		UFSD("EXIT, result %llu\n", (unsigned long long)result);
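
P.S. For illustration only - a minimal userspace sketch, not the kernel code,
with all names (block_ptr, writer, reader, BLOCK_SIZE) made up for the example.
It shows the ordering the patch enforces: clear the freshly allocated block
first, publish the pointer to it second, so a concurrent reader that finds the
pointer can never see uncleared contents. In the kernel the clearing side is
ufs_clear_frags() and the publication side is the ufs_cpu_to_data_ptr() store
under meta_lock; here C11 acquire/release atomics merely stand in for that.

/* sketch.c - hypothetical illustration, not part of the patch */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* stands in for the block pointer in the inode block tree */
static _Atomic(char *) block_ptr;

/* writer: allocate a "block", clear it, and only then publish the pointer */
static void *writer(void *arg)
{
	char *blk = malloc(BLOCK_SIZE);

	memset(blk, 0, BLOCK_SIZE);	/* ufs_clear_frags() analogue */
	/* publish after clearing; release pairs with the reader's acquire */
	atomic_store_explicit(&block_ptr, blk, memory_order_release);
	return NULL;
}

/* reader: if the pointer is visible, the data behind it must be zeroed */
static void *reader(void *arg)
{
	char *blk = atomic_load_explicit(&block_ptr, memory_order_acquire);

	if (blk && blk[0] != 0)
		fprintf(stderr, "hole read back non-zero data\n");
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Build with "cc -pthread sketch.c". If the memset() were moved after the store
(the pre-patch ordering), the reader could in principle observe uncleared
contents; with the order above it cannot.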