From: Qu Wenruo <wqu@suse.com>
To: "Darrick J. Wong" <djwong@kernel.org>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	Christian Brauner <brauner@kernel.org>,
	Christoph Hellwig <hch@infradead.org>,
	linux-bcachefs@vger.kernel.org
Subject: Re: Direct IO reads being split unexpected at page boundary, but in the middle of a fs block (bs > ps cases)
Date: Wed, 8 Oct 2025 07:58:01 +1030	[thread overview]
Message-ID: <8e3ee208-e0d1-4799-a70b-fd4e4de34bc5@suse.com> (raw)
In-Reply-To: <20251007145843.GP1587915@frogsfrogsfrogs>



On 2025/10/8 01:28, Darrick J. Wong wrote:
> On Tue, Oct 07, 2025 at 01:00:58PM +1030, Qu Wenruo wrote:
>>
>>
>> On 2025/10/7 01:37, Matthew Wilcox wrote:
>>> On Wed, Oct 01, 2025 at 10:59:18AM +0930, Qu Wenruo wrote:
>>>> Recently during the btrfs bs > ps direct IO enablement, I'm hitting a case
>>>> where:
>>>>
>>>> - The direct IO iov is properly aligned to the fs block size (8K, 2 pages)
>>>>     They do not need to be backed by large folios; regular non-contiguous
>>>>     pages are supported.
>>>>
>>>> - btrfs can now handle sub-block pages
>>>>     But it still requires bi_size and (bi_sector << 9) to be block size
>>>>     aligned.
>>>>
>>>> - The bio passed into iomap_dio_ops::submit_io is not block size
>>>>     aligned
>>>>     The bio contains only one page, not two.
>>>
>>> That seems like a bug in the VFS/iomap somewhere.  Maybe try cc'ing the
>>> people who know this code?
>>>
>>
>> Add xfs and bcachefs subsystem into CC.
>>
>> The root cause is that __bio_iov_iter_get_pages() can split the iov.
>>
>> In my case, I hit the following dio during iomap_dio_bio_iter():
>>
>>   fsstress-1153      6..... 68530us : iomap_dio_bio_iter: length=81920
>> nr_pages=20 enter
>>   fsstress-1153      6..... 68539us : iomap_dio_bio_iter: length=81920
>> realsize=69632(17 pages)
>>   fsstress-1153      6..... 68540us : iomap_dio_bio_iter: nr_pages=3 for next
>>
>> So bio_iov_iter_get_pages() split the 20 pages into two segments (17 + 3
>> pages).
>> That 17/3 split does not meet btrfs' block size requirement (8K block
>> size in my case).
> 
> Just out of curiosity, what are the corresponding
> iomap_iter_{src,dst}map tracepoints for these iomap_dio_bio_iters?

None; those are ad-hoc trace_printk()s I added.

> 
> I'm assuming there's one mapping for all 80k of data?
> 
>> I see XFS has a comment related to bio_iov_iter_get_pages() inside
>> xfs_file_dio_write(), but there are no special checks other than the
>> iov_iter_alignment() check, which btrfs also does.
>>
>> I guess that since XFS does not need to bother with data checksums, such
>> a split is not a big deal?
> 
> I think so too.  The bios all point to the original iomap_dio so the
> ioend only gets called once for the full write IO, so a completion
> of an out of place write will never see sub-block ranges.
> 
>> On the other hand, bcachefs reverts to the block boundary instead,
>> which solves the problem.
>> However, btrfs uses iomap for direct IO, so we cannot manually revert
>> the iov/bio inside btrfs.
>>
>> So I guess in this case we need to add a callback to iomap to get the fs
>> block size, so that iomap_dio_bio_iter() can at least revert to the fs
>> block boundary?
> 
> Or add a flags bit to iomap_dio_ops to indicate that the fs requires
> block sized bios?

Yep, that's the next step.

> 
> I'm guessing that you can't do sub-block directio writes to btrfs
> either?

Exactly.

Thanks,
Qu

> 
> --D
> 
>> Thanks,
>> Qu
>>
> 


Thread overview: 5+ messages
2025-10-01  1:29 Direct IO reads being split unexpected at page boundary, but in the middle of a fs block (bs > ps cases) Qu Wenruo
2025-10-06 15:07 ` Matthew Wilcox
2025-10-07  2:30   ` Qu Wenruo
2025-10-07 14:58     ` Darrick J. Wong
2025-10-07 21:28       ` Qu Wenruo [this message]
