From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Catherine Hoang <catherine.hoang@oracle.com>
Subject: Re: Why a lot of fses are using bdev's page cache to do super block read/write?
Date: Thu, 10 Jul 2025 06:10:04 +0930	[thread overview]
Message-ID: <02bf24f8-c7f1-4f70-8af0-73b9656c00b6@gmx.com> (raw)
In-Reply-To: <20250709150436.GG2672029@frogsfrogsfrogs>



On 2025/7/10 00:34, Darrick J. Wong wrote:
> On Wed, Jul 09, 2025 at 06:35:00PM +0930, Qu Wenruo wrote:
>> Hi,
>>
>> Recently I've been trying to remove the direct use of the bdev's page
>> cache from btrfs superblock IOs and replace it with the common bio
>> interface (mostly bdev_rw_virt()).
>>
>> However I'm hitting random generic/492 failures where blkid sometimes
>> fails to detect any valid btrfs superblock signature.
> 
> Yes, you need to invalidate_bdev() after writing the superblock directly
> to disk via submit_bio.
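
For reference, a minimal sketch of that pattern, using the plain bio
interface directly (bdev_rw_virt() would wrap the same submission); the
buffer and sector names are placeholders and error handling is trimmed:

static int write_sb_and_invalidate(struct block_device *bdev,
				   void *sb_buf, sector_t sb_sector)
{
	struct bio bio;
	struct bio_vec bvec;
	int ret;

	/* sb_buf is assumed to be a kmalloc'ed, page-aligned 4K buffer. */
	bio_init(&bio, bdev, &bvec, 1, REQ_OP_WRITE | REQ_SYNC | REQ_FUA);
	bio.bi_iter.bi_sector = sb_sector;
	__bio_add_page(&bio, virt_to_page(sb_buf), PAGE_SIZE,
		       offset_in_page(sb_buf));

	ret = submit_bio_wait(&bio);
	if (ret)
		return ret;

	/*
	 * The bdev page cache may still hold the old superblock bytes,
	 * which is what a buffered reader like blkid would see; drop them.
	 */
	invalidate_bdev(bdev);
	return 0;
}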

Since invalidate_bdev() invalidates the whole page cache of the bdev, it 
may increase the latency of superblock writeback, which may cause 
unexpected performance changes.

All we really need is to ensure the contents of the folio covering our 
superblock are up to date, so it looks like we're better off sticking 
with the existing bdev page cache usage.
Although btrfs' superblock writeback still does a few things out of the 
ordinary, that will be addressed properly.
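
If we ever do switch to the pure bio path, a narrower invalidation of just
the folio range covering the superblock might be enough to avoid the full
invalidate_bdev(); a rough, untested sketch (assuming the bdev's
address_space is reachable as bdev->bd_mapping, as on current kernels):

static int invalidate_sb_range(struct block_device *bdev, loff_t sb_pos,
			       size_t sb_len)
{
	pgoff_t start = sb_pos >> PAGE_SHIFT;
	pgoff_t end = (sb_pos + sb_len - 1) >> PAGE_SHIFT;

	/* Drop only the folios that cover the just-written superblock. */
	return invalidate_inode_pages2_range(bdev->bd_mapping, start, end);
}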

Thanks to Matthew and Darrick for the detailed explanations,
Qu

> 
>> This led to more digging, and to my surprise, using the bdev's page cache
>> for superblock IOs is not an exception; in fact f2fs is doing exactly the
>> same thing.
>>
>>
>> This makes me wonder:
>>
>> - Should a fs use the bdev's page cache directly?
>>    I thought a fs shouldn't do this, and that the bio interface should
>>    be enough for most if not all cases.
>>
>>    Or am I wrong in the first place?
> 
> As willy said, most filesystems use the bdev pagecache because then they
> don't have to implement their own (metadata) buffer cache.  The downside
> is that any filesystem that does so must be prepared to handle the
> buffer_head contents changing any time they cycle the bh lock because
> anyone can write to the block device of a mounted fs ala tune2fs.
> 
> Effectively this means that you have to (a) revalidate the entire buffer
> contents every time you lock_buffer(); and (b) you can't make decisions
> based on superblock feature bits in the superblock bh directly.
> 
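In other words, every user of such a bh has to re-validate under the
buffer lock, along these lines (a rough illustration only; the myfs_*
names, struct layout and MYFS_SUPER_MAGIC are made up):

/* Hypothetical on-disk layout, purely for illustration. */
struct myfs_super_block {
	__le32	s_magic;
	__le32	s_feature_incompat;
	__le32	s_checksum;
};

static int myfs_validate_super(struct buffer_head *bh)
{
	struct myfs_super_block *sb = (void *)bh->b_data;
	int ret = 0;

	lock_buffer(bh);
	/*
	 * Anything may have rewritten the bdev since the last validation
	 * (tune2fs-style tools), so re-check magic and checksum every time,
	 * and never rely on feature bits read from these bytes earlier.
	 * MYFS_SUPER_MAGIC and myfs_sb_csum_ok() are made up here.
	 */
	if (le32_to_cpu(sb->s_magic) != MYFS_SUPER_MAGIC ||
	    !myfs_sb_csum_ok(sb))
		ret = -EUCLEAN;
	unlock_buffer(bh);

	return ret;
}
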
> I made that mistake when adding metadata_csum support to ext4 -- we'd
> only connect to the crc32c "crypto" module if checksums were enabled in
> the ondisk super at mount time, but then there were a couple of places
> that looked at the ondisk super bits at runtime, so you could flip the
> bit on and crash the kernel almost immediately.
> 
> Nowadays you could protect against malicious writes with
> BLK_DEV_WRITE_MOUNTED=n, so at least that's mitigated a little bit.
> Note that (a) implies the use of BH_Verified is a giant footgun.
> 
> Catherine Hoang [now cc'd] has prototyped a generic buffer cache so that
> we can fix these vulnerabilities in ext2:
> https://lore.kernel.org/linux-ext4/20250326014928.61507-1-catherine.hoang@oracle.com/
> 
>> - What keeps a fs superblock update from racing with a user space
>>    device scan?
>>
>>    I guess it's the regular page/folio locking of the bdev page cache.
>>    But that also means pure bio-based IO will always race with buffered
>>    reads of a block device.
> 
> Right.  In theory you could take the posix advisory lock (aka flock)
> from inside the kernel for the duration of the sb write, and that would
> prevent libblkid/udev from seeing torn/stale contents because they take
> LOCK_SH.
> 
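For reference, the userspace half of that convention (shared lock for
probers, exclusive for writers, as udev and libblkid already do) looks
roughly like this; a minimal sketch, not taken from libblkid itself:

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int probe_signatures(const char *devnode)
{
	int fd = open(devnode, O_RDONLY | O_CLOEXEC);

	if (fd < 0)
		return -1;
	/* Block until no exclusive (writer) lock is held on the device. */
	if (flock(fd, LOCK_SH) == 0) {
		/* ... read and parse superblock signatures here ... */
		flock(fd, LOCK_UN);
	}
	close(fd);
	return 0;
}
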
>> - If so, is there any special bio flag to prevent such a race?
>>    So far I have been unable to find one.
> 
> No.
> 
> --D
> 
>> Thanks,
>> Qu
>>
> 



Thread overview: 4+ messages
2025-07-09  9:05 Why a lot of fses are using bdev's page cache to do super block read/write? Qu Wenruo
2025-07-09 12:01 ` Matthew Wilcox
2025-07-09 15:04 ` Darrick J. Wong
2025-07-09 20:40   ` Qu Wenruo [this message]
