From: Brian Foster <bfoster@redhat.com>
To: linux-fsdevel@vger.kernel.org
Cc: linux-xfs@vger.kernel.org, linux-mm@kvack.org, hch@infradead.org,
	djwong@kernel.org, willy@infradead.org
Subject: [PATCH v3 3/7] iomap: optional zero range dirty folio processing
Date: Mon, 14 Jul 2025 16:41:18 -0400
Message-ID: <20250714204122.349582-4-bfoster@redhat.com>
In-Reply-To: <20250714204122.349582-1-bfoster@redhat.com>
References: <20250714204122.349582-1-bfoster@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

The only way zero range can currently process unwritten mappings with
dirty pagecache is to check whether the range is dirty before mapping
lookup and then flush when at least one underlying mapping is
unwritten. This ordering is required to prevent the iomap lookup from
racing with folio writeback and reclaim.

Since zero range can skip ranges of unwritten mappings that are clean
in cache, this operation can be improved by allowing the filesystem to
provide a set of dirty folios that require zeroing. In turn, rather
than flush or iterate file offsets, zero range can iterate over the
folios in the batch and advance over clean or uncached ranges in
between.

Add a folio_batch in struct iomap_iter and provide a helper for
filesystems to populate the batch at lookup time. Update the folio
lookup path to return the next folio in the batch, if provided, and
advance the iter if the folio starts beyond the current offset.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 89 +++++++++++++++++++++++++++++++++++++++---
 fs/iomap/iter.c        |  6 +++
 include/linux/iomap.h  |  4 ++
 3 files changed, 94 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 38da2fa6e6b0..194e3cc0857f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -750,6 +750,28 @@ static struct folio *__iomap_get_folio(struct iomap_iter *iter, size_t len)
 	if (!mapping_large_folio_support(iter->inode->i_mapping))
 		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
 
+	if (iter->fbatch) {
+		struct folio *folio = folio_batch_next(iter->fbatch);
+
+		if (!folio)
+			return NULL;
+
+		/*
+		 * The folio mapping generally shouldn't have changed based on
+		 * fs locks, but be consistent with filemap lookup and retry
+		 * the iter if it does.
+		 */
+		folio_lock(folio);
+		if (unlikely(folio->mapping != iter->inode->i_mapping)) {
+			iter->iomap.flags |= IOMAP_F_STALE;
+			folio_unlock(folio);
+			return NULL;
+		}
+
+		folio_get(folio);
+		return folio;
+	}
+
 	if (folio_ops && folio_ops->get_folio)
 		return folio_ops->get_folio(iter, pos, len);
 	else
@@ -811,6 +833,8 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
 	int status = 0;
 
 	len = min_not_zero(len, *plen);
+	*foliop = NULL;
+	*plen = 0;
 
 	if (fatal_signal_pending(current))
 		return -EINTR;
@@ -819,6 +843,15 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
 
+	/*
+	 * No folio means we're done with a batch. We still have range to
+	 * process so return and let the caller iterate and refill the batch.
+	 */
+	if (!folio) {
+		WARN_ON_ONCE(!iter->fbatch);
+		return 0;
+	}
+
 	/*
 	 * Now we have a locked folio, before we do anything with it we need to
 	 * check that the iomap we have cached is not stale. The inode extent
@@ -839,6 +872,21 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
 		}
 	}
 
+	/*
+	 * The folios in a batch may not be contiguous. If we've skipped
+	 * forward, advance the iter to the pos of the current folio. If the
+	 * folio starts beyond the end of the mapping, it may have been trimmed
+	 * since the lookup for whatever reason. Return a NULL folio to
+	 * terminate the op.
+	 */
+	if (folio_pos(folio) > iter->pos) {
+		len = min_t(u64, folio_pos(folio) - iter->pos,
+				iomap_length(iter));
+		status = iomap_iter_advance(iter, &len);
+		if (status || !len)
+			goto out_unlock;
+	}
+
 	pos = iomap_trim_folio_range(iter, folio, poffset, &len);
 
 	if (srcmap->type == IOMAP_INLINE)
@@ -1377,6 +1425,12 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
 		if (iter->iomap.flags & IOMAP_F_STALE)
 			break;
 
+		/* a NULL folio means we're done with a folio batch */
+		if (!folio) {
+			status = iomap_iter_advance_full(iter);
+			break;
+		}
+
 		/* warn about zeroing folios beyond eof that won't write back */
 		WARN_ON_ONCE(folio_pos(folio) > iter->inode->i_size);
 
@@ -1398,6 +1452,26 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
 	return status;
 }
 
+loff_t
+iomap_fill_dirty_folios(
+	struct iomap_iter	*iter,
+	loff_t			offset,
+	loff_t			length)
+{
+	struct address_space	*mapping = iter->inode->i_mapping;
+	pgoff_t			start = offset >> PAGE_SHIFT;
+	pgoff_t			end = (offset + length - 1) >> PAGE_SHIFT;
+
+	iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL);
+	if (!iter->fbatch)
+		return offset + length;
+	folio_batch_init(iter->fbatch);
+
+	filemap_get_folios_dirty(mapping, &start, end, iter->fbatch);
+	return (start << PAGE_SHIFT);
+}
+EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios);
+
 int
 iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 		const struct iomap_ops *ops, void *private)
@@ -1426,7 +1500,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 	 * flushing on partial eof zeroing, special case it to zero the
 	 * unaligned start portion if already dirty in pagecache.
 	 */
-	if (off &&
+	if (!iter.fbatch && off &&
 	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
 		iter.len = plen;
 		while ((ret = iomap_iter(&iter, ops)) > 0)
@@ -1442,13 +1516,18 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 	 * if dirty and the fs returns a mapping that might convert on
 	 * writeback.
 	 */
-	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
-			iter.pos, iter.pos + iter.len - 1);
+	range_dirty = filemap_range_needs_writeback(mapping, iter.pos,
+			iter.pos + iter.len - 1);
 	while ((ret = iomap_iter(&iter, ops)) > 0) {
 		const struct iomap *srcmap = iomap_iter_srcmap(&iter);
 
-		if (srcmap->type == IOMAP_HOLE ||
-		    srcmap->type == IOMAP_UNWRITTEN) {
+		if (WARN_ON_ONCE(iter.fbatch &&
+				 srcmap->type != IOMAP_UNWRITTEN))
+			return -EIO;
+
+		if (!iter.fbatch &&
+		    (srcmap->type == IOMAP_HOLE ||
+		     srcmap->type == IOMAP_UNWRITTEN)) {
 			s64 status;
 
 			if (range_dirty) {
diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
index 6ffc6a7b9ba5..89bd5951a6fd 100644
--- a/fs/iomap/iter.c
+++ b/fs/iomap/iter.c
@@ -9,6 +9,12 @@
 
 static inline void iomap_iter_reset_iomap(struct iomap_iter *iter)
 {
+	if (iter->fbatch) {
+		folio_batch_release(iter->fbatch);
+		kfree(iter->fbatch);
+		iter->fbatch = NULL;
+	}
+
 	iter->status = 0;
 	memset(&iter->iomap, 0, sizeof(iter->iomap));
 	memset(&iter->srcmap, 0, sizeof(iter->srcmap));
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 522644d62f30..0b9b460b2873 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <linux/pagevec.h>
 
 struct address_space;
 struct fiemap_extent_info;
@@ -239,6 +240,7 @@ struct iomap_iter {
 	unsigned flags;
 	struct iomap iomap;
 	struct iomap srcmap;
+	struct folio_batch *fbatch;
 	void *private;
 };
 
@@ -345,6 +347,8 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
 bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio);
 int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
 		const struct iomap_ops *ops);
+loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,
+		loff_t length);
 int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
 		bool *did_zero, const struct iomap_ops *ops, void *private);
 int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
-- 
2.50.0
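
As a rough usage sketch (not part of this patch): a filesystem that maps
an unwritten extent for a zero range request could populate the batch via
iomap_fill_dirty_folios() and trim the mapping it reports to the returned
offset. The function name fs_zero_range_begin_unwritten(), its calling
convention, and the trimming policy below are all hypothetical; how the
->iomap_begin path gets hold of the struct iomap_iter is glossed over, and
the real filesystem hookup presumably comes later in this series.

/*
 * Hypothetical example: collect dirty folios for a zero range over an
 * unwritten extent and trim the reported mapping to the offset returned
 * by iomap_fill_dirty_folios(). Clean or uncached pagecache beyond that
 * offset can then be skipped without flushing.
 */
static int fs_zero_range_begin_unwritten(struct iomap_iter *iter,
		loff_t pos, loff_t length, struct iomap *iomap)
{
	loff_t batch_end;

	/*
	 * On allocation failure the helper returns pos + length, which
	 * leaves the mapping untrimmed and falls back to the existing
	 * flush-based zeroing behavior (iter->fbatch stays NULL).
	 */
	batch_end = iomap_fill_dirty_folios(iter, pos, length);

	iomap->type = IOMAP_UNWRITTEN;
	iomap->offset = pos;
	iomap->length = length;
	if (batch_end > pos && batch_end < pos + length)
		iomap->length = batch_end - pos;

	return 0;
}

In this sketch, any remainder of the range past the trimmed mapping gets a
fresh lookup (and a fresh batch) on the next iomap_iter() call, which lines
up with the zero range loop above: with a batch present the mapping must be
unwritten, and a NULL folio from the batch simply advances the iter over
the rest of the mapping.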