Date: Tue, 15 Jul 2025 08:35:03 -0400
From: Brian Foster <bfoster@redhat.com>
To: "Darrick J. Wong"
Cc: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org, linux-mm@kvack.org, hch@infradead.org, willy@infradead.org
Subject: Re: [PATCH v3 3/7] iomap: optional zero range dirty folio processing
References: <20250714204122.349582-1-bfoster@redhat.com> <20250714204122.349582-4-bfoster@redhat.com> <20250715052259.GO2672049@frogsfrogsfrogs>
In-Reply-To: <20250715052259.GO2672049@frogsfrogsfrogs>

On Mon, Jul 14, 2025 at 10:22:59PM -0700, Darrick J. Wong wrote:
> On Mon, Jul 14, 2025 at 04:41:18PM -0400, Brian Foster wrote:
> > The only way zero range can currently process unwritten mappings
> > with dirty pagecache is to check whether the range is dirty before
> > mapping lookup and then flush when at least one underlying mapping
> > is unwritten. This ordering is required to prevent iomap lookup from
> > racing with folio writeback and reclaim.
> > 
> > Since zero range can skip ranges of unwritten mappings that are
> > clean in cache, this operation can be improved by allowing the
> > filesystem to provide a set of dirty folios that require zeroing. In
> > turn, rather than flush or iterate file offsets, zero range can
> > iterate on folios in the batch and advance over clean or uncached
> > ranges in between.
> > 
> > Add a folio_batch in struct iomap and provide a helper for fs' to
> 
> /me confused by the single quote; is this supposed to read:
> 
> "...for the fs to populate..."?
> 

Eh, I intended it to read "for filesystems to populate." I'll change it
to that locally.

Brian

> Either way the code changes look like a reasonable thing to do for the
> pagecache (try to grab a bunch of dirty folios while XFS holds the
> mapping lock) so
> 
> Reviewed-by: "Darrick J. Wong"
> 
> --D
> 
> > populate the batch at lookup time. Update the folio lookup path to
> > return the next folio in the batch, if provided, and advance the
> > iter if the folio starts beyond the current offset.
> > 
> > Signed-off-by: Brian Foster
> > Reviewed-by: Christoph Hellwig
> > ---
> >  fs/iomap/buffered-io.c | 89 +++++++++++++++++++++++++++++++++++++++---
> >  fs/iomap/iter.c        |  6 +++
> >  include/linux/iomap.h  |  4 ++
> >  3 files changed, 94 insertions(+), 5 deletions(-)
> > 
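To illustrate the batch-driven walk described above for anyone skimming
the thread: the toy userspace C below is mine, not part of the patch,
and every name in it is invented. A "batch" here is just a sparse list
of dirty folio offsets; the point is that the loop jumps over clean
gaps instead of visiting every file offset, and simply stops once the
batch is exhausted (the remainder of the range is clean):

#include <stdio.h>

#define FOLIO_SIZE 4096L

int main(void)
{
	/* pretend the fs found dirty folios at offsets 0, 32k and 36k */
	const long batch[] = { 0, 8 * FOLIO_SIZE, 9 * FOLIO_SIZE };
	const int batch_count = 3;
	long pos = 0, end = 16 * FOLIO_SIZE;
	int i = 0;

	while (pos < end && i < batch_count) {
		if (batch[i] > pos)
			pos = batch[i];	/* advance over the clean gap */
		printf("zero dirty folio at %ld\n", pos);
		pos += FOLIO_SIZE;	/* done with this folio */
		i++;
	}
	/* batch exhausted: skip the rest of the range entirely */
	return 0;
}
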
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 38da2fa6e6b0..194e3cc0857f 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -750,6 +750,28 @@ static struct folio *__iomap_get_folio(struct iomap_iter *iter, size_t len)
> >  	if (!mapping_large_folio_support(iter->inode->i_mapping))
> >  		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
> >  
> > +	if (iter->fbatch) {
> > +		struct folio *folio = folio_batch_next(iter->fbatch);
> > +
> > +		if (!folio)
> > +			return NULL;
> > +
> > +		/*
> > +		 * The folio mapping generally shouldn't have changed based on
> > +		 * fs locks, but be consistent with filemap lookup and retry
> > +		 * the iter if it does.
> > +		 */
> > +		folio_lock(folio);
> > +		if (unlikely(folio->mapping != iter->inode->i_mapping)) {
> > +			iter->iomap.flags |= IOMAP_F_STALE;
> > +			folio_unlock(folio);
> > +			return NULL;
> > +		}
> > +
> > +		folio_get(folio);
> > +		return folio;
> > +	}
> > +
> >  	if (folio_ops && folio_ops->get_folio)
> >  		return folio_ops->get_folio(iter, pos, len);
> >  	else
> > @@ -811,6 +833,8 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
> >  	int status = 0;
> >  
> >  	len = min_not_zero(len, *plen);
> > +	*foliop = NULL;
> > +	*plen = 0;
> >  
> >  	if (fatal_signal_pending(current))
> >  		return -EINTR;
> > @@ -819,6 +843,15 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
> >  	if (IS_ERR(folio))
> >  		return PTR_ERR(folio);
> >  
> > +	/*
> > +	 * No folio means we're done with a batch. We still have range to
> > +	 * process so return and let the caller iterate and refill the batch.
> > +	 */
> > +	if (!folio) {
> > +		WARN_ON_ONCE(!iter->fbatch);
> > +		return 0;
> > +	}
> > +
> >  	/*
> >  	 * Now we have a locked folio, before we do anything with it we need to
> >  	 * check that the iomap we have cached is not stale. The inode extent
> > @@ -839,6 +872,21 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
> >  		}
> >  	}
> >  
> > +	/*
> > +	 * The folios in a batch may not be contiguous. If we've skipped
> > +	 * forward, advance the iter to the pos of the current folio. If the
> > +	 * folio starts beyond the end of the mapping, it may have been trimmed
> > +	 * since the lookup for whatever reason. Return a NULL folio to
> > +	 * terminate the op.
> > +	 */
> > +	if (folio_pos(folio) > iter->pos) {
> > +		len = min_t(u64, folio_pos(folio) - iter->pos,
> > +			    iomap_length(iter));
> > +		status = iomap_iter_advance(iter, &len);
> > +		if (status || !len)
> > +			goto out_unlock;
> > +	}
> > +
> >  	pos = iomap_trim_folio_range(iter, folio, poffset, &len);
> >  
> >  	if (srcmap->type == IOMAP_INLINE)
> > @@ -1377,6 +1425,12 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
> >  		if (iter->iomap.flags & IOMAP_F_STALE)
> >  			break;
> >  
> > +		/* a NULL folio means we're done with a folio batch */
> > +		if (!folio) {
> > +			status = iomap_iter_advance_full(iter);
> > +			break;
> > +		}
> > +
> >  		/* warn about zeroing folios beyond eof that won't write back */
> >  		WARN_ON_ONCE(folio_pos(folio) > iter->inode->i_size);
> >  
> > @@ -1398,6 +1452,26 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
> >  	return status;
> >  }
> >  
> > +loff_t
> > +iomap_fill_dirty_folios(
> > +	struct iomap_iter	*iter,
> > +	loff_t			offset,
> > +	loff_t			length)
> > +{
> > +	struct address_space	*mapping = iter->inode->i_mapping;
> > +	pgoff_t			start = offset >> PAGE_SHIFT;
> > +	pgoff_t			end = (offset + length - 1) >> PAGE_SHIFT;
> > +
> > +	iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL);
> > +	if (!iter->fbatch)
> > +		return offset + length;
> > +	folio_batch_init(iter->fbatch);
> > +
> > +	filemap_get_folios_dirty(mapping, &start, end, iter->fbatch);
> > +	return (start << PAGE_SHIFT);
> > +}
> > +EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios);
> > +
> >  int
> >  iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> >  		const struct iomap_ops *ops, void *private)
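A note on the intended calling convention for iomap_fill_dirty_folios(),
which this patch leaves to the filesystem: per the review comment above,
the fs is expected to call it from its lookup path while its own mapping
lock is held, so the batch is grabbed before the folios can be written
back or reclaimed. The fragment below is my sketch, not code from this
series; example_zero_range_lookup() and its trimming policy are
invented, and how the fs obtains the iomap_iter is not shown here:

/*
 * Hypothetical fs-side fragment: populate the dirty folio batch for a
 * zero range over an unwritten extent while the fs mapping lock is
 * held.
 */
static void example_zero_range_lookup(struct iomap_iter *iter,
				      loff_t offset, loff_t length)
{
	loff_t end;

	/*
	 * end is the file offset the dirty scan advanced to, or
	 * offset + length if the batch could not be allocated (in that
	 * case iter->fbatch stays NULL and zero range falls back to
	 * the flush-based path).
	 */
	end = iomap_fill_dirty_folios(iter, offset, length);

	/*
	 * The fs could trim the mapping it returns to end, so the core
	 * revisits (and the fs refills) anything beyond it.
	 */
	iter->iomap.length = min_t(loff_t, iter->iomap.length,
				   end - iter->iomap.offset);
}
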
> > @@ -1426,7 +1500,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> >  	 * flushing on partial eof zeroing, special case it to zero the
> >  	 * unaligned start portion if already dirty in pagecache.
> >  	 */
> > -	if (off &&
> > +	if (!iter.fbatch && off &&
> >  	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
> >  		iter.len = plen;
> >  		while ((ret = iomap_iter(&iter, ops)) > 0)
> > @@ -1442,13 +1516,18 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> >  	 * if dirty and the fs returns a mapping that might convert on
> >  	 * writeback.
> >  	 */
> > -	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
> > -			iter.pos, iter.pos + iter.len - 1);
> > +	range_dirty = filemap_range_needs_writeback(mapping, iter.pos,
> > +			iter.pos + iter.len - 1);
> >  	while ((ret = iomap_iter(&iter, ops)) > 0) {
> >  		const struct iomap *srcmap = iomap_iter_srcmap(&iter);
> >  
> > -		if (srcmap->type == IOMAP_HOLE ||
> > -		    srcmap->type == IOMAP_UNWRITTEN) {
> > +		if (WARN_ON_ONCE(iter.fbatch &&
> > +				 srcmap->type != IOMAP_UNWRITTEN))
> > +			return -EIO;
> > +
> > +		if (!iter.fbatch &&
> > +		    (srcmap->type == IOMAP_HOLE ||
> > +		     srcmap->type == IOMAP_UNWRITTEN)) {
> >  			s64 status;
> >  
> >  			if (range_dirty) {
> > diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
> > index 6ffc6a7b9ba5..89bd5951a6fd 100644
> > --- a/fs/iomap/iter.c
> > +++ b/fs/iomap/iter.c
> > @@ -9,6 +9,12 @@
> >  
> >  static inline void iomap_iter_reset_iomap(struct iomap_iter *iter)
> >  {
> > +	if (iter->fbatch) {
> > +		folio_batch_release(iter->fbatch);
> > +		kfree(iter->fbatch);
> > +		iter->fbatch = NULL;
> > +	}
> > +
> >  	iter->status = 0;
> >  	memset(&iter->iomap, 0, sizeof(iter->iomap));
> >  	memset(&iter->srcmap, 0, sizeof(iter->srcmap));
> > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > index 522644d62f30..0b9b460b2873 100644
> > --- a/include/linux/iomap.h
> > +++ b/include/linux/iomap.h
> > @@ -9,6 +9,7 @@
> >  #include
> >  #include
> >  #include
> > +#include <linux/pagevec.h>
> >  
> >  struct address_space;
> >  struct fiemap_extent_info;
> > @@ -239,6 +240,7 @@ struct iomap_iter {
> >  	unsigned flags;
> >  	struct iomap iomap;
> >  	struct iomap srcmap;
> > +	struct folio_batch *fbatch;
> >  	void *private;
> >  };
> >  
> > @@ -345,6 +347,8 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
> >  bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio);
> >  int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
> >  		const struct iomap_ops *ops);
> > +loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,
> > +		loff_t length);
> >  int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
> >  		bool *did_zero, const struct iomap_ops *ops, void *private);
> >  int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
> > -- 
> > 2.50.0
> > 
> 
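One last detail worth spelling out from iomap_fill_dirty_folios(): start
and end are page indices, so as I read it the returned offset is always
page-aligned and can land beyond offset + length when the tail of the
range is dirty. A standalone check of the arithmetic (my example values,
4k pages assumed):

#include <assert.h>

#define PAGE_SHIFT 12

int main(void)
{
	long long offset = 5000, length = 20000;
	unsigned long start = offset >> PAGE_SHIFT;		  /* page index 1 */
	unsigned long end = (offset + length - 1) >> PAGE_SHIFT;  /* page index 6 */

	assert(start == 1 && end == 6);

	/* if the scan consumed dirty folios through index 6, start is 7 */
	start = 7;
	assert(((long long)start << PAGE_SHIFT) == 28672);  /* > 24999 */
	return 0;
}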