From: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
To: Pankaj Raghav <pankaj.raghav@linux.dev>,
Ojaswin Mujoo <ojaswin@linux.ibm.com>
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
djwong@kernel.org, john.g.garry@oracle.com, willy@infradead.org,
hch@lst.de, jack@suse.cz, Luis Chamberlain <mcgrof@kernel.org>,
dgc@kernel.org, tytso@mit.edu, andres@anarazel.de,
brauner@kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org,
"Pankaj Raghav (Samsung)" <pankaj.raghav@linux.dev>,
Pankaj Raghav <p.raghav@samsung.com>
Subject: Re: [RFC PATCH v2 2/5] iomap: Add initial support for buffered RWF_WRITETHROUGH
Date: Wed, 22 Apr 2026 12:10:20 +0530 [thread overview]
Message-ID: <5x5jts4r.ritesh.list@gmail.com> (raw)
In-Reply-To: <02ed5bfc-7ebf-41ee-bd8a-c8e030c35bca@linux.dev>
Pankaj Raghav <pankaj.raghav@linux.dev> writes:
> On 4/21/2026 8:15 PM, Ojaswin Mujoo wrote:
>> On Mon, Apr 20, 2026 at 01:56:02PM +0200, Pankaj Raghav (Samsung) wrote:
>>>> +
>>>> + if (wt_ops->writethrough_submit)
>>>> + wt_ops->writethrough_submit(wt_ctx->inode, iomap, wt_ctx->bio_pos,
>>>> + len);
>>>> +
>>>> + bio = bio_alloc(iomap->bdev, wt_ctx->nr_bvecs, REQ_OP_WRITE, GFP_NOFS);
>>>
>>> We might want to check if bio_alloc succeeded here.
>>
>> Hi Pankaj, so we pass GFP_NOFS which has GFP_DIRECT_RECLAIM and
>> according to comment over bio_alloc()
>>
>> * If %__GFP_DIRECT_RECLAIM is set then bio_alloc will always be able to
>> * allocate a bio. This is due to the mempool guarantees. To make this work,
>> * callers must never allocate more than 1 bio at a time from the general pool.
>>
>> And we seem to be following this.
>>
>
> Makes sense. Thanks for the clarification.
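To make the mempool guarantee concrete: the point of __GFP_DIRECT_RECLAIM is that bio_alloc() can fall back to a reserved pool instead of failing. Not kernel code, just a toy userspace model of that fallback (names and structs here are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for struct bio; real kernel types are not used here. */
struct bio { int dummy; };

/* A one-element "reserved pool" standing in for the bio mempool. */
static struct bio reserved_pool[1];
static bool reserved_in_use;

/*
 * Model of bio_alloc()'s guarantee: if the normal allocation fails and
 * the caller may direct-reclaim (GFP_NOFS includes __GFP_DIRECT_RECLAIM),
 * fall back to the reserved pool, so the caller never sees NULL -- as
 * long as only one bio at a time is taken from the general pool.
 */
static struct bio *toy_bio_alloc(bool can_direct_reclaim, bool slab_fails)
{
	if (!slab_fails)
		return malloc(sizeof(struct bio));
	if (can_direct_reclaim && !reserved_in_use) {
		reserved_in_use = true;
		return &reserved_pool[0];
	}
	return NULL; /* only reachable without __GFP_DIRECT_RECLAIM */
}
```

So with GFP_NOFS the NULL check really is unnecessary, which matches the bio_alloc() comment quoted above.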
>
>>>
>>>> + bio->bi_iter.bi_sector = iomap_sector(iomap, wt_ctx->bio_pos);
>>>> + bio->bi_end_io = iomap_writethrough_bio_end_io;
>>>> + bio->bi_private = wt_ctx;
>>>> +
>>>> + for (i = 0; i < wt_ctx->nr_bvecs; i++)
>>> In the unlikely scenario where we encounter an error, do we also have
>>> to clear the writeback flag on all the folios that are part of this
>>> bvec so far?
>>>
>>> Something like explicitly iterate over wt_ctx->bvec[0] through
>>> wt_ctx->bvec[nr_bvecs - 1], manually call folio_end_writeback(bvec[i].bv_page)
>>> on them, and then discard the bvecs by setting the nr_bvecs = 0;
>>>
>>> I am wondering if the folios that were processed until now will be in
>>> PG_WRITEBACK state which can affect reclaim as we never clear the flag.
>>
>> Hey Pankaj, yes you are right. I think the error handling is a bit buggy
>> and Sashiko has also pointed out some of these issues. I'll take care of
>> this in v3, thanks for pointing this out.
>>
>
> FWIW, I got the following panic on xfs/011 (not reproducible all the time) when
> I was running the xfstests with 16k block size with the writethrough patches:
>
Good point. I don't think we have tested the large blocksize path yet.
Ojaswin has been running fsx and fsstress and we didn't hit this
scenario, so it looks like some corner path was missed.
Thanks for testing that.
> [76313.736356] INFO: task fsstress:1845687 blocked for more than 122 seconds.
> [76313.738751] Not tainted 7.0.0-08885-g97cbd56b7479 #43
> [76313.740650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
> message.
> [76313.743311] task:fsstress state:D stack:0 pid:1845687 tgid:1845687
> ppid:1845685 task_flags:0x400140 flags:0x00080000
> [76313.747137] Call Trace:
> [76313.748000] <TASK>
> [76313.748830] __schedule+0xcc2/0x3c40
> [76313.750129] ? __pfx___schedule+0x10/0x10
> [76313.751479] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.753214] schedule+0x78/0x2e0
> [76313.754334] io_schedule+0x92/0x100
> [76313.755597] folio_wait_bit_common+0x26a/0x6f0
> [76313.757156] ? __pfx_folio_wait_bit_common+0x10/0x10
> [76313.758873] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.760508] ? xas_load+0x19/0x260
> [76313.761693] ? __pfx_wake_page_function+0x10/0x10
> [76313.763386] ? __pfx_filemap_get_entry+0x10/0x10
> [76313.764948] folio_wait_writeback+0x58/0x190
> [76313.766499] __filemap_get_folio_mpol+0x56d/0x800
> [76313.768085] ? kvm_read_and_reset_apf_flags+0x4a/0x70
> [76313.769899] iomap_write_begin+0xea7/0x1e90
> [76313.771304] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.773016] ? asm_exc_page_fault+0x22/0x30
> [76313.774427] ? __pfx_iomap_write_begin+0x10/0x10
> [76313.776100] ? fault_in_readable+0x80/0xe0
> [76313.777476] ? __pfx_fault_in_readable+0x10/0x10
> [76313.779106] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.780765] ? balance_dirty_pages_ratelimited_flags+0x549/0xcb0
> [76313.782861] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.784457] ? fault_in_iov_iter_readable+0xe5/0x250
> [76313.786221] iomap_file_writethrough_write+0x9fd/0x1ce0
Looks like, while we were in iomap_writethrough_iter(), we ended up
looping over the same folio twice without submitting the bio.
So this could be a short-copy case (written < bytes). I guess, if we
have a short copy, then we should still submit the prepared bio in
iomap_writethrough_iter(); otherwise we will deadlock when we iterate
over the same folio twice (because we previously moved the folio into
the writeback state).
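To spell out the fix I have in mind: on a short copy, flush whatever bio has been built before retrying, so the retry never ends up waiting on a folio we ourselves marked writeback. A toy model (all names here are illustrative, not the actual patch code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Counts bio submissions in this toy model. */
static int submitted;

/* Stand-in for submitting the prepared bio; the real code would
 * complete the bio and end writeback on the covered folios. */
static void submit_prepared_bio(void)
{
	submitted++;
}

/*
 * Sketch of the short-copy handling: if copy_from_iter() returned
 * fewer bytes than requested, submit the in-flight bio first so the
 * folios it covers can finish writeback before the iteration retries
 * and touches them again. Returns true when the copy completed fully.
 */
static bool handle_copy_result(size_t written, size_t bytes)
{
	if (written < bytes) {
		submit_prepared_bio();
		return false; /* caller faults in the pages and retries */
	}
	return true;
}
```

With that, the retry path would find the folio's writeback already completed (or completing) instead of blocking forever in folio_wait_writeback(), which is where the trace below is stuck.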
> [76313.787978] ? __pfx_iomap_file_writethrough_write+0x10/0x10
> [76313.789991] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.791589] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.793314] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.794932] ? current_time+0x73/0x2b0
> [76313.796132] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.797276] ? xfs_file_write_checks+0x420/0x900 [xfs]
> [76313.798786] xfs_file_buffered_write+0x195/0xae0 [xfs]
> [76313.800243] ? __pfx_xfs_file_buffered_write+0x10/0x10 [xfs]
> [76313.801775] ? kasan_save_track+0x14/0x40
> [76313.802843] ? kasan_save_free_info+0x3b/0x70
> [76313.803908] ? __kasan_slab_free+0x4f/0x80
> [76313.804894] ? vfs_fstatat+0x55/0xa0
> [76313.805835] ? __do_sys_newfstatat+0x7b/0xe0
> [76313.806899] ? do_syscall_64+0x5b/0x540
> [76313.807829] ? srso_alias_return_thunk+0x5/0xfbef5
> [76313.809052] ? xfs_file_write_iter+0x22e/0xa80 [xfs]
> [76313.810451] do_iter_readv_writev+0x453/0xa70
>
> I have a feeling this has to do with the error handling as we are stuck waiting
> for writeback to complete. It is not reproducible because it might be dependent
> on the state of the system before this triggers. Let me see if I can find a way
> to reliably reproduce this so that we have something to verify against once we
> make these changes.
>
Maybe we can also add a WARN_ON() to detect and confirm that this only
happens in the short-copy case.
We will give this a try at our end too. Also, the error handling pointed
out by you and Ojaswin needs review and fixing in the next revision, to
catch any remaining paths where we may end up like this.
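Roughly, the diagnostic I am thinking of would fire when we are about to leave the iter with a bio still pending after a short copy. Userspace sketch with a stand-in for the kernel WARN_ON() macro (the condition and names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for the kernel WARN_ON(): record hits instead of
 * printing a backtrace, so the behaviour is testable here. */
static int warn_hits;
#define WARN_ON(cond) ((cond) ? (warn_hits++, 1) : 0)

/*
 * Fire only in the suspected bad state: a short copy happened while a
 * prepared-but-unsubmitted bio still holds folios in writeback. If the
 * warning triggers right before the hang, that confirms the theory.
 */
static void check_short_copy_state(size_t written, size_t bytes,
				   int nr_bvecs)
{
	WARN_ON(written < bytes && nr_bvecs > 0);
}
```

In the kernel this would just be the real WARN_ON()/WARN_ON_ONCE() at the equivalent spot in iomap_writethrough_iter().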
> --
> Pankaj
Thanks Pankaj for giving this a try at your setup.
-ritesh