Subject: FAILED: patch "[PATCH] zonefs: Always invalidate last cached page on append write" failed to apply to 5.10-stable tree
To: damien.lemoal@opensource.wdc.com, Hans.Holmberg@wdc.com, hans.holmberg@wdc.com, hch@lst.de, johannes.thumshirn@wdc.com
Cc:
From:
Date: Mon, 03 Apr 2023 10:34:58 +0200
Message-ID: <2023040358-blade-preheated-5a2d@gregkh>
List-ID: X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 5.10-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.10.y
git checkout FETCH_HEAD
git cherry-pick -x c1976bd8f23016d8706973908f2bb0ac0d852a8f
#
git commit -s
git send-email --to '' --in-reply-to '2023040358-blade-preheated-5a2d@gregkh' --subject-prefix 'PATCH 5.10.y' HEAD^..

Possible dependencies:

c1976bd8f230 ("zonefs: Always invalidate last cached page on append write")
4008e2a0b01a ("zonefs: Reorganize code")
a608da3bd730 ("zonefs: Detect append writes at invalid locations")
8745889a7fd0 ("Merge tag 'iomap-6.0-merge-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From c1976bd8f23016d8706973908f2bb0ac0d852a8f Mon Sep 17 00:00:00 2001
From: Damien Le Moal
Date: Wed, 29 Mar 2023 13:16:01 +0900
Subject: [PATCH] zonefs: Always invalidate last cached page on append write

When a direct append write is executed, the append offset may
correspond to the last page of a sequential file inode which might have
been cached already by buffered reads, page faults with mmap-read or
non-direct readahead.

To ensure that the on-disk and cached data are consistent for such a
last cached page, make sure to always invalidate it in
zonefs_file_dio_append(). If the invalidation fails, return -EBUSY to
userspace to differentiate from IO errors.

This invalidation will always be a no-op when the FS block size (device
zone write granularity) is equal to the page size (e.g. 4K).

Reported-by: Hans Holmberg
Fixes: 02ef12a663c7 ("zonefs: use REQ_OP_ZONE_APPEND for sync DIO")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Tested-by: Hans Holmberg

diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
index 617e4f9db42e..c6ab2732955e 100644
--- a/fs/zonefs/file.c
+++ b/fs/zonefs/file.c
@@ -382,6 +382,7 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
 	struct zonefs_zone *z = zonefs_inode_zone(inode);
 	struct block_device *bdev = inode->i_sb->s_bdev;
 	unsigned int max = bdev_max_zone_append_sectors(bdev);
+	pgoff_t start, end;
 	struct bio *bio;
 	ssize_t size = 0;
 	int nr_pages;
@@ -390,6 +391,19 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
 	max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);
 	iov_iter_truncate(from, max);
 
+	/*
+	 * If the inode block size (zone write granularity) is smaller than
+	 * the page size, we may be appending data belonging to the last page
+	 * of the inode straddling inode->i_size, with that page already
+	 * cached due to a buffered read or readahead. So make sure to
+	 * invalidate that page. This will always be a no-op for the case
+	 * where the block size is equal to the page size.
+	 */
+	start = iocb->ki_pos >> PAGE_SHIFT;
+	end = (iocb->ki_pos + iov_iter_count(from) - 1) >> PAGE_SHIFT;
+	if (invalidate_inode_pages2_range(inode->i_mapping, start, end))
+		return -EBUSY;
+
 	nr_pages = iov_iter_npages(from, BIO_MAX_VECS);
 	if (!nr_pages)
 		return 0;
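
[Editor's note: the following is a minimal user-space sketch, not part of
the patch above, illustrating the page index arithmetic used by the new
invalidate_inode_pages2_range() call. The append offset and size are
hypothetical values chosen so that the block size (2K) is smaller than
the page size (4K).]

/*
 * Sketch only: derive the start/end page indexes covered by the
 * invalidation, the same way zonefs_file_dio_append() now does.
 */
#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4K pages */

int main(void)
{
	unsigned long long ki_pos = 6144;	/* hypothetical append offset == i_size, 2K-aligned */
	unsigned long long count  = 2048;	/* hypothetical number of bytes appended */

	unsigned long start = ki_pos >> PAGE_SHIFT;
	unsigned long end   = (ki_pos + count - 1) >> PAGE_SHIFT;

	/*
	 * Both indexes are 1 here: the page straddling i_size, which a prior
	 * buffered read or readahead may have cached, is the one invalidated.
	 * If the block size equaled the page size, i_size (and thus ki_pos)
	 * would be page aligned, the range would only cover pages beyond
	 * i_size that cannot hold cached data, and the call would be a no-op.
	 */
	printf("invalidate page indexes %lu..%lu\n", start, end);
	return 0;
}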