From: Miao Xie <miaox@cn.fujitsu.com>
To: dsterba@suse.cz, Chris Mason <chris.mason@fusionio.com>,
"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
"alex.btrfs@zadarastorage.com" <alex.btrfs@zadarastorage.com>
Subject: Re: [PATCH v3] btrfs: clean snapshots one by one
Date: Tue, 14 May 2013 14:32:45 +0800
Message-ID: <5191DA8D.9060209@cn.fujitsu.com>
In-Reply-To: <20130507115449.GE16456@twin.jikos.cz>

On Tue, 7 May 2013 13:54:49 +0200, David Sterba wrote:
> On Mon, May 06, 2013 at 08:41:06PM -0400, Chris Mason wrote:
>>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>>> index 988b860..4de2351 100644
>>> --- a/fs/btrfs/disk-io.c
>>> +++ b/fs/btrfs/disk-io.c
>>> @@ -1690,15 +1690,19 @@ static int cleaner_kthread(void *arg)
>>> struct btrfs_root *root = arg;
>>>
>>> do {
>>> + int again = 0;
>>> +
>>> if (!(root->fs_info->sb->s_flags & MS_RDONLY) &&
>>> + down_read_trylock(&root->fs_info->sb->s_umount) &&
>>> mutex_trylock(&root->fs_info->cleaner_mutex)) {
>>> btrfs_run_delayed_iputs(root);
>>> - btrfs_clean_old_snapshots(root);
>>> + again = btrfs_clean_one_deleted_snapshot(root);
>>> mutex_unlock(&root->fs_info->cleaner_mutex);
>>> btrfs_run_defrag_inodes(root->fs_info);
>>> + up_read(&root->fs_info->sb->s_umount);
>>
>> Can we use just the cleaner mutex for this? We're deadlocking during
>> 068 with autodefrag on because the cleaner is holding s_umount while
>> autodefrag is trying to bump the writer count.
>
> I have now reproduced the deadlock and see where it's stuck. It did not
> happen with running 068 in a loop, but after interrupting the test.
>
>> If unmount takes the cleaner mutex once it should wait long enough for
>> the cleaner to stop.
>
> You mean removing s_umount from here completely? I'm not sure about
> other mis-interaction, eg with remount + autodefrag. Miao sent a patch
> for that case http://www.spinics.net/lists/linux-btrfs/msg16634.html
> (but it would not fix this deadlock).
I have given up on that patch and fixed the problem in a different way:
http://marc.info/?l=linux-btrfs&m=136142833013628&w=2

I don't think we need s_umount here; all we need to do is check R/O while
holding cleaner_mutex. Otherwise we may keep deleting dead trees after the
fs has been remounted R/O.
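
For illustration only, a minimal sketch of that idea (not the patch that was
actually merged): drop s_umount from cleaner_kthread() and re-check MS_RDONLY
with cleaner_mutex held. The freeze/stop tail and the assumption that the
remount path also serializes on cleaner_mutex are editorial guesses, not taken
from this thread:

static int cleaner_kthread(void *arg)
{
	struct btrfs_root *root = arg;

	do {
		int again = 0;

		if (mutex_trylock(&root->fs_info->cleaner_mutex)) {
			/*
			 * Check R/O under cleaner_mutex; this assumes the
			 * remount path takes the same mutex, so the flag
			 * cannot flip while a snapshot is being deleted.
			 */
			if (!(root->fs_info->sb->s_flags & MS_RDONLY)) {
				btrfs_run_delayed_iputs(root);
				again = btrfs_clean_one_deleted_snapshot(root);
			}
			mutex_unlock(&root->fs_info->cleaner_mutex);
			btrfs_run_defrag_inodes(root->fs_info);
		}

		if (!try_to_freeze() && !again) {
			set_current_state(TASK_INTERRUPTIBLE);
			if (!kthread_should_stop())
				schedule();
			__set_current_state(TASK_RUNNING);
		}
	} while (!kthread_should_stop());

	return 0;
}

Since the cleaner never takes s_umount at all in this variant, the lock
inversion with autodefrag seen in test 068 should not be possible.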
Thanks
Miao
>
> I'm for keeping the clean-by-one patch for 3.10, we can fix other
> regressions during rc cycle.
>
> david
Thread overview: 12+ messages
2013-03-12 15:13 [PATCH v3] btrfs: clean snapshots one by one David Sterba
2013-03-16 19:34 ` Alex Lyakas
2013-05-07 0:41 ` Chris Mason
2013-05-07 11:54 ` David Sterba
2013-05-10 13:04 ` Chris Mason
2013-05-14 6:32 ` Miao Xie [this message]
2013-07-04 15:29 ` Alex Lyakas
2013-07-04 17:03 ` David Sterba
2013-07-04 19:52 ` Alex Lyakas
2013-07-05 2:21 ` Josef Bacik
2013-07-14 16:20 ` Alex Lyakas
2013-07-15 16:41 ` Josef Bacik