From: Qu Wenruo <wqu@suse.com>
To: Christoph Hellwig <hch@infradead.org>,
Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v2 0/4] btrfs: introduce btrfs specific bdev holder ops and implement mark_dead() call back
Date: Mon, 9 Jun 2025 15:57:27 +0930 [thread overview]
Message-ID: <7352872f-968a-43b5-a2db-2d329424896d@suse.com> (raw)
In-Reply-To: <aEZzTyBsj7x-4g5l@infradead.org>
On 2025/6/9 15:08, Christoph Hellwig wrote:
> On Mon, Jun 09, 2025 at 03:01:32PM +0930, Qu Wenruo wrote:
>>
>>
>> On 2025/6/9 14:51, Christoph Hellwig wrote:
>>> No full review yet, but I think in the long run your maintenance
>>> burden will be a lot lower if you implement my suggestion of using
>>> the generic code and adding a new devloss super_operation.
>>
>> The main problem here is that we don't go through setup_bdev_super() at all,
>> and the super_block structure itself only supports one bdev.
>>
>> Thus even if we implement a devloss callback in the super ops, it will
>> still require quite a bit of extra work to make btrfs go through
>> setup_bdev_super().
>
> Why do you need setup_bdev_super? Everything relevant is already
> open coded in btrfs, you'll just need to use fs_holder_ops and ensure
> the sb is stored as holder in every block device.
>
> The other nice thing is that you can also stage the changes, i.e.
> first resurrect the old holder cleanups, then support ->shutdown,
> then add the new ->devloss callback to not shut down the entire file
> system if there is enough redundancy.
>
>> Although I have to admit that if all btrfs bdevs go through fs_holder_ops,
>> it indeed resolves a lot of extra races more easily (e.g. the freeze ioctl
>> vs. bdev freeze callback races).
>>
>>>
>>> This might require resurrecting my old holder cleanup that Johannes
>>> reposted about a year ago.
>>>
>> Maybe it's time to revive that series; would you mind sharing a link to
>> it?
>
> My original posting:
>
> https://lore.kernel.org/linux-btrfs/b083ae24-2273-479f-8c9e-96cb9ef083b8@wdc.com/
>
> Rebase from Johannes:
>
> https://lore.kernel.org/linux-btrfs/20240214-hch-device-open-v1-0-b153428b4f72@wdc.com/
>
Thanks a lot, I'll give that series a review and a rebase.
It would be great if we did not need to introduce any extra per-device
freeze/thaw serialization inside btrfs.
Thanks,
Qu
Thread overview: 9+ messages
2025-06-09 5:19 [PATCH v2 0/4] btrfs: introduce btrfs specific bdev holder ops and implement mark_dead() call back Qu Wenruo
2025-06-09 5:19 ` [PATCH v2 1/4] btrfs: use fs_info as the block device holder Qu Wenruo
2025-06-09 5:19 ` [PATCH v2 2/4] btrfs: replace fput() with bdev_fput() for block devices Qu Wenruo
2025-06-09 5:19 ` [PATCH v2 3/4] btrfs: implement a basic per-block-device call backs Qu Wenruo
2025-06-09 5:19 ` [PATCH v2 4/4] btrfs: add a simple dead device detection mechanism Qu Wenruo
2025-06-09 5:21 ` [PATCH v2 0/4] btrfs: introduce btrfs specific bdev holder ops and implement mark_dead() call back Christoph Hellwig
2025-06-09 5:31 ` Qu Wenruo
2025-06-09 5:38 ` Christoph Hellwig
2025-06-09 6:27 ` Qu Wenruo [this message]