Date: Tue, 15 Jul 2025 09:30:57 -0700
From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster
Cc: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-mm@kvack.org, hch@infradead.org, willy@infradead.org
Subject: Re: [PATCH v3 6/7] iomap: remove old partial eof zeroing optimization
Message-ID: <20250715163057.GC2672049@frogsfrogsfrogs>
References: <20250714204122.349582-1-bfoster@redhat.com>
 <20250714204122.349582-7-bfoster@redhat.com>
 <20250715053417.GR2672049@frogsfrogsfrogs>
 <20250715143733.GO2672029@frogsfrogsfrogs>

On Tue, Jul 15, 2025 at 12:20:14PM -0400, Brian Foster wrote:
> On Tue, Jul 15, 2025 at 07:37:33AM -0700, Darrick J. Wong wrote:
> > On Tue, Jul 15, 2025 at 08:36:54AM -0400, Brian Foster wrote:
> > > On Mon, Jul 14, 2025 at 10:34:17PM -0700, Darrick J. Wong wrote:
> > > > On Mon, Jul 14, 2025 at 04:41:21PM -0400, Brian Foster wrote:
> > > > > iomap_zero_range() optimizes the partial eof block zeroing use case
> > > > > by force zeroing if the mapping is dirty. This is to avoid frequent
> > > > > flushing on file extending workloads, which hurts performance.
> > > > >
> > > > > Now that the folio batch mechanism provides a more generic solution
> > > > > and is used by the only real zero range user (XFS), this isolated
> > > > > optimization is no longer needed. Remove the unnecessary code and
> > > > > let callers use the folio batch or fall back to flushing by default.
> > > > >
> > > > > Signed-off-by: Brian Foster
> > > > > Reviewed-by: Christoph Hellwig
> > > > Heh, I was staring at this last Friday chasing fuse+iomap bugs in
> > > > fallocate zerorange and straining to remember what this does.
> > > > Is this chunk still needed if the ->iomap_begin implementation doesn't
> > > > (or forgets to) grab the folio batch for iomap?
> > > >
> > > No, the hunk removed by this patch is just an optimization. The fallback
> > > code here flushes the range if it's dirty and retries the lookup (i.e.
> > > picking up unwritten conversions that were pending via dirty pagecache).
> > > That flush logic caused a performance regression in a particular
> > > workload, so this was introduced to mitigate that regression by just
> > > doing the zeroing for the first block or so if the folio is dirty. [1]
> > >
> > > The reason for removing it is more just for maintainability. XFS is
> > > really the only user here and it is changing over to the more generic
> > > batch mechanism, which effectively provides the same optimization, so
> > > this basically becomes dead/duplicate code. If an fs doesn't use the
> > > batch mechanism it will just fall back to the flush and retry approach,
> > > which can be slower but is functionally correct.
> >
> > Oh ok thanks for the reminder.
> > Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
> >
> > > > My bug turned out to be a bug in my fuse+iomap design -- with the way
> > > > iomap_zero_range does things, you have to flush+unmap, punch the range
> > > > and zero the range. If you punch and realloc the range and *then* try
> > > > to zero the range, the new unwritten extents cause iomap to miss dirty
> > > > pages that fuse should've unmapped. Ooops.
> > > >
> > > I don't quite follow. How do you mean it misses dirty pages?
> >
> > Oops, I misspoke, the folios were clean. Let's say the pagecache is
> > sparsely populated with some folios for written space:
> >
> > -------fffff-------fffffff
> > wwwwwwwwwwwwwwwwwwwwwwwwww
> >
> > Now you tell it to go zero range the middle. fuse's fallocate code
> > issues the upcall to userspace, which changes some mappings:
> >
> > -------fffff-------fffffff
> > wwwwwuuuuuuuuuuuwwwwwwwwww
> >
> > Only after the upcall returns does the kernel try to do the pagecache
> > zeroing. Unfortunately, the mapping changed to unwritten so
> > iomap_zero_range doesn't see the "fffff" and leaves its contents intact.
> >
>
> Ah, interesting. So presumably the fuse fs is not doing any cache
> management, and this creates an unexpected inconsistency between
> pagecache and block state.
>
> So what's the solution to this for fuse+iomap? Invalidate the cache
> range before or after the callback or something?

Port xfs_flush_unmap_range, I think.

--D

> Brian
>
> > (Note: Non-iomap fuse defers everything to the fuse server so this isn't
> > a problem if the fuse server does all the zeroing itself.)
> >
> > --D
> >
> > > Brian
> > >
> > > [1] Details described in the commit log of fde4c4c3ec1c ("iomap: elide
> > > flush from partial eof zero range").
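(For reference, the flush-and-retry fallback Brian describes boils down to
something like the sketch below. This is an illustration of the idea only,
not the exact fs/iomap/buffered-io.c code: the real function defers the
flush until it actually sees a hole or unwritten mapping over dirty
pagecache, whereas this sketch flushes up front for simplicity.)

	/*
	 * Sketch: no folio batch was provided by ->iomap_begin, so fall
	 * back to "flush if dirty, then redo the mapping lookup".
	 */
	if (filemap_range_needs_writeback(mapping, pos, pos + len - 1)) {
		/*
		 * Dirty pagecache over unwritten extents may convert them
		 * at writeback time, so push it out before trusting the
		 * mapping lookup below.
		 */
		ret = filemap_write_and_wait_range(mapping, pos,
						   pos + len - 1);
		if (ret)
			return ret;
	}

	/* Mappings now reflect any conversions; zero what still needs it. */
	while ((ret = iomap_iter(&iter, ops)) > 0)
		iter.status = iomap_zero_iter(&iter, did_zero);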
> >
> > > > --D
> > > >
> > > > > ---
> > > > >  fs/iomap/buffered-io.c | 24 ------------------------
> > > > >  1 file changed, 24 deletions(-)
> > > > >
> > > > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > > > index 194e3cc0857f..d2bbed692c06 100644
> > > > > --- a/fs/iomap/buffered-io.c
> > > > > +++ b/fs/iomap/buffered-io.c
> > > > > @@ -1484,33 +1484,9 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> > > > >  		.private	= private,
> > > > >  	};
> > > > >  	struct address_space *mapping = inode->i_mapping;
> > > > > -	unsigned int blocksize = i_blocksize(inode);
> > > > > -	unsigned int off = pos & (blocksize - 1);
> > > > > -	loff_t plen = min_t(loff_t, len, blocksize - off);
> > > > >  	int ret;
> > > > >  	bool range_dirty;
> > > > >
> > > > > -	/*
> > > > > -	 * Zero range can skip mappings that are zero on disk so long as
> > > > > -	 * pagecache is clean. If pagecache was dirty prior to zero range, the
> > > > > -	 * mapping converts on writeback completion and so must be zeroed.
> > > > > -	 *
> > > > > -	 * The simplest way to deal with this across a range is to flush
> > > > > -	 * pagecache and process the updated mappings. To avoid excessive
> > > > > -	 * flushing on partial eof zeroing, special case it to zero the
> > > > > -	 * unaligned start portion if already dirty in pagecache.
> > > > > -	 */
> > > > > -	if (!iter.fbatch && off &&
> > > > > -	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
> > > > > -		iter.len = plen;
> > > > > -		while ((ret = iomap_iter(&iter, ops)) > 0)
> > > > > -			iter.status = iomap_zero_iter(&iter, did_zero);
> > > > > -
> > > > > -		iter.len = len - (iter.pos - pos);
> > > > > -		if (ret || !iter.len)
> > > > > -			return ret;
> > > > > -	}
> > > > > -
> > > > >  	/*
> > > > >  	 * To avoid an unconditional flush, check pagecache state and only flush
> > > > >  	 * if dirty and the fs returns a mapping that might convert on
> > > > > --
> > > > > 2.50.0
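(Likewise, on the xfs_flush_unmap_range() suggestion above: the XFS helper
essentially writes back and then drops the pagecache over the affected
range before the extent manipulation happens. A fuse+iomap port might look
roughly like the sketch below; the function name, the page-size rounding,
and the placement before the fallocate upcall are assumptions for
illustration, not actual fuse or XFS code.)

/*
 * Hypothetical helper, loosely modelled on XFS's xfs_flush_unmap_range():
 * write back dirty folios and then toss the pagecache over the range
 * *before* the server-side punch/realloc, so that a later
 * iomap_zero_range() pass cannot be fooled by clean folios sitting over
 * now-unwritten extents.
 */
static int fuse_iomap_flush_unmap_range(struct inode *inode, loff_t offset,
					loff_t len)
{
	struct address_space *mapping = inode->i_mapping;
	loff_t start = round_down(offset, PAGE_SIZE);
	loff_t end = round_up(offset + len, PAGE_SIZE) - 1;
	int error;

	error = filemap_write_and_wait_range(mapping, start, end);
	if (error)
		return error;
	truncate_pagecache_range(inode, start, end);
	return 0;
}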