From: Qu Wenruo <quwenruo@cn.fujitsu.com>
To: Anand Jain <anand.jain@oracle.com>,
Qu Wenruo <quwenruo.btrfs@gmx.com>, <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH 2/2] btrfs: Remove unneeded missing device number check
Date: Fri, 18 Sep 2015 10:06:32 +0800 [thread overview]
Message-ID: <55FB71A8.3000004@cn.fujitsu.com> (raw)
In-Reply-To: <55FB6D31.9030504@oracle.com>
Anand Jain wrote on 2015/09/18 09:47 +0800:
>
>
> On 09/17/2015 06:01 PM, Qu Wenruo wrote:
>> Thanks for pointing this out.
>
>
>> Although the previous patch is small enough, for the remount case we
>> need to iterate over the entire existing chunk cache.
>
> Yes, indeed.
>
> Thinking hard on this - is there any test case that these two patches
> solve which the original patch [1] didn't solve?
Yep, your patch is OK for fixing the case where the single chunks all
sit on the surviving disk.
But IMHO it's a little aggressive and not as safe as the old code.
For example, say one uses the single metadata profile on 2 disks, and
each disk has one metadata chunk on it.
One device then goes missing.
Your patch will still allow the fs to be mounted rw, even though some
tree blocks may live on the missing device.
For the RO case it won't be too dangerous, but if we mount it RW, who
knows what will happen.
(Normal tree COW should fail before any real write, but I'm not sure
about other RW operations like scrub/replace/balance and so on.)
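
Something like the following should reproduce it (a sketch only: the
device names are examples, and the fill step is just one way to force a
metadata chunk onto each disk):

# mkfs.btrfs -f -m single -d single /dev/sdb /dev/sdc
# mount /dev/sdb /mnt/btrfs

(write enough files that a metadata chunk is allocated on each disk)

# umount /mnt/btrfs
# wipefs -a /dev/sdc
# mount /dev/sdb /mnt/btrfs -o degraded

With [1], that last mount would succeed rw even though some tree blocks
now live only on the wiped sdc.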
And I think that's the original design concept behind the old missing
device number check; it's not a bad idea to keep following it.
As for the patch size, I have found a good way to handle it, which
should keep the patch(set) below 200 lines.
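
To sketch the idea (helper and variable names below are rough
approximations for illustration only, not the actual patch), the check
would walk the cached chunk maps and test each chunk against its own
profile's tolerance:

static bool btrfs_chunks_degradable(struct btrfs_fs_info *fs_info)
{
	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
	struct rb_node *node;
	bool ret = true;

	read_lock(&em_tree->lock);
	for (node = rb_first(&em_tree->map); node; node = rb_next(node)) {
		struct extent_map *em;
		struct map_lookup *map;
		int missing = 0;
		int i;

		em = rb_entry(node, struct extent_map, rb_node);
		map = (struct map_lookup *)em->bdev;
		/* count stripes of this chunk that sit on missing devices */
		for (i = 0; i < map->num_stripes; i++)
			if (map->stripes[i].dev->missing)
				missing++;
		/* max_tolerated() is a hypothetical helper returning the
		 * per-profile limit, e.g. 1 for RAID1, 0 for single */
		if (missing > max_tolerated(map->type)) {
			ret = false;
			break;
		}
	}
	read_unlock(&em_tree->lock);
	return ret;
}

Both open_ctree() and btrfs_remount() could then call this instead of
comparing the global missing device count.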
Furthermore, it's even possible to make btrfs switch the mount option
to degraded at runtime when a device goes missing.
Thanks,
Qu
>
> I tried to break both approaches (this patch set and [1]) but I
> wasn't successful. Sorry if I am missing something.
>
> Thanks, Anand
>
> [1] [PATCH 23/23] Btrfs: allow -o rw,degraded for single group profile
>
>
>> So the fix for remount will take a little more time.
>
>> Thanks for reviewing.
>> Qu
>>
>> On 2015/09/17 17:43, Anand Jain wrote:
>>>
>>>
>>> On 09/16/2015 11:43 AM, Qu Wenruo wrote:
>>>> As we do the per-chunk missing device number check at
>>>> read_one_chunk() time, there is no need for the global missing
>>>> device number check.
>>>>
>>>> Just remove it.
>>>
>>> However, the missing device count that we have during the remount is
>>> not fine-grained per chunk.
>>> -----------
>>> btrfs_remount
>>> ::
>>>         if (fs_info->fs_devices->missing_devices >
>>>             fs_info->num_tolerated_disk_barrier_failures &&
>>>             !(*flags & MS_RDONLY ||
>>>               btrfs_test_opt(root, DEGRADED))) {
>>>                 btrfs_warn(fs_info,
>>>                            "too many missing devices, writeable remount is not allowed");
>>>                 ret = -EACCES;
>>>                 goto restore;
>>>         }
>>> ---------
>>>
>>> Thanks, Anand
>>>
>>>
>>>> Now btrfs can handle the following case:
>>>> # mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc
>>>>
>>>> The data chunk will be located on sdb, so it should be safe to wipe sdc
>>>> # wipefs -a /dev/sdc
>>>>
>>>> # mount /dev/sdb /mnt/btrfs -o degraded
>>>>
>>>> Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
>>>> ---
>>>> fs/btrfs/disk-io.c | 8 --------
>>>> 1 file changed, 8 deletions(-)
>>>>
>>>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>>>> index 0b658d0..ac640ea 100644
>>>> --- a/fs/btrfs/disk-io.c
>>>> +++ b/fs/btrfs/disk-io.c
>>>> @@ -2947,14 +2947,6 @@ retry_root_backup:
>>>>         }
>>>>         fs_info->num_tolerated_disk_barrier_failures =
>>>>                 btrfs_calc_num_tolerated_disk_barrier_failures(fs_info);
>>>> -       if (fs_info->fs_devices->missing_devices >
>>>> -           fs_info->num_tolerated_disk_barrier_failures &&
>>>> -           !(sb->s_flags & MS_RDONLY)) {
>>>> -               pr_warn("BTRFS: missing devices(%llu) exceeds the limit(%d), writeable mount is not allowed\n",
>>>> -                       fs_info->fs_devices->missing_devices,
>>>> -                       fs_info->num_tolerated_disk_barrier_failures);
>>>> -               goto fail_sysfs;
>>>> -       }
>>>>
>>>>         fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
>>>>                                                "btrfs-cleaner");
>>>>
Thread overview: 12+ messages
2015-09-16 3:43 [PATCH 1/2] btrfs: Do per-chunk degrade mode check at mount time Qu Wenruo
2015-09-16 3:43 ` [PATCH 2/2] btrfs: Remove unneeded missing device number check Qu Wenruo
2015-09-17 9:43 ` Anand Jain
2015-09-17 10:01 ` Qu Wenruo
2015-09-18 1:47 ` Anand Jain
2015-09-18 2:06 ` Qu Wenruo [this message]
2015-09-18 6:45 ` Anand Jain
2015-09-20 0:31 ` Qu Wenruo
2015-09-20 5:37 ` Anand Jain
2015-09-21 2:09 ` Qu Wenruo
2015-09-17 1:48 ` [PATCH 1/2] btrfs: Do per-chunk degrade mode check at mount time Qu Wenruo
2015-09-17 9:37 ` Anand Jain