From: Zhao Lei <zhaolei@cn.fujitsu.com>
To: "'Chris Mason'" <clm@fb.com>, <linux-btrfs@vger.kernel.org>
Subject: RE: [PATCH 1/2] btrfs: reada: limit max works count
Date: Thu, 21 Jan 2016 11:36:49 +0800	[thread overview]
Message-ID: <009001d153fc$f629bed0$e27d3c70$@cn.fujitsu.com> (raw)
In-Reply-To: <20160120174823.ck5zeoihwsrbvoih@floor.thefacebook.com>



> -----Original Message-----
> From: Chris Mason [mailto:clm@fb.com]
> Sent: Thursday, January 21, 2016 1:48 AM
> To: Zhao Lei <zhaolei@cn.fujitsu.com>; linux-btrfs@vger.kernel.org
> Subject: Re: [PATCH 1/2] btrfs: reada: limit max works count
> 
> On Wed, Jan 20, 2016 at 10:16:27AM -0500, Chris Mason wrote:
> > On Tue, Jan 12, 2016 at 03:46:26PM +0800, Zhao Lei wrote:
> > > reada creates 2 works for each level of the tree, recursively.
> > >
> > > For a tree with many levels, the number of created works is
> > > 2^level_of_tree.
> > > We don't actually need that many works in parallel, so this patch
> > > limits the maximum to BTRFS_MAX_MIRRORS * 2.
> >
> > Hi,
> >
> > I don't think you end up calling atomic_dec() for every time that
> > reada_start_machine() is called.  Also, I'd rather not have a global
> > static variable to limit the parallel workers, when we have more than
> > one FS mounted it'll end up limiting things too much.
> >
> > With this patch applied, I'm seeing deadlocks during btrfs/066.    You
> > have to run the scrub tests as well, basically we're just getting
> > fsstress run alongside scrub.
> >
> > I'll run a few more times with it reverted to make sure, but I think
> > it's the root cause.
> 
> I spoke too soon, it ended up deadlocking a few tests later.  Sorry, for now I'm
> pulling all the reada patches.  We'll sort out bug fixes vs cleanups in later rcs.
> 
> With all of the reada patches removed, the deadlocks are gone.
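
[Editorial note: the two concerns quoted above (an atomic_dec() that is not reached on every reada_start_machine() path, and a cap shared by every mounted filesystem) are easiest to see in a small model. Below is a minimal userspace sketch of the bounded-worker pattern the patch aims for; it is not the btrfs code, and the names MAX_READA_WORKS, reada_works_cnt and reada_queue_work are made up for illustration. The property the review asks for is that every successful increment is paired with exactly one decrement, done by the worker itself.]

/*
 * Minimal userspace model of the bounded-worker idea (hypothetical
 * names, not the btrfs code).  At most MAX_READA_WORKS workers run at
 * once; a finished worker frees its slot for a later request.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_READA_WORKS 6		/* stands in for BTRFS_MAX_MIRRORS * 2 */

static atomic_int reada_works_cnt;	/* would be per-FS in real code, see below */

static void *reada_worker(void *arg)
{
	(void)arg;
	/* ... readahead work for one tree level would go here ... */
	atomic_fetch_sub(&reada_works_cnt, 1);	/* exactly one dec per queued work */
	return NULL;
}

/* Try to queue another worker; refuse once the cap is reached. */
static int reada_queue_work(pthread_t *t)
{
	int ret;

	if (atomic_fetch_add(&reada_works_cnt, 1) >= MAX_READA_WORKS) {
		atomic_fetch_sub(&reada_works_cnt, 1);	/* undo, nothing was queued */
		return -1;
	}
	ret = pthread_create(t, NULL, reada_worker, NULL);
	if (ret)
		atomic_fetch_sub(&reada_works_cnt, 1);	/* undo on failure, too */
	return ret;
}

int main(void)
{
	pthread_t threads[16];
	int queued = 0;

	for (int i = 0; i < 16; i++)
		if (reada_queue_work(&threads[queued]) == 0)
			queued++;

	for (int i = 0; i < queued; i++)
		pthread_join(threads[i], NULL);

	printf("queued %d of 16 requests, at most %d concurrently\n",
	       queued, MAX_READA_WORKS);
	return 0;
}

Undoing the increment when pthread_create() fails keeps the counter honest on error paths as well, which is the kind of imbalance the review above points at.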
> 
Sorry to hear that.

Actually, I ran xfstests with all the patches applied and saw no regression in my environment:

FSTYP         -- btrfs
PLATFORM      -- Linux/x86_64 lenovo 4.4.0-rc6_HEAD_8e16378041f7f3531c256fd3e17a36a4fca92d29_+
MKFS_OPTIONS  -- /dev/sdb6
MOUNT_OPTIONS -- /dev/sdb6 /var/ltf/tester/scratch_mnt

btrfs/066 151s ... 164s
Ran: btrfs/066
Passed all 1 tests

I'll investigate the root cause.

Thanks
Zhaolei

> -chris
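
[Editorial note: on the global-static point quoted above, keeping the counter inside the per-filesystem structure rather than in a file-scope static means one busy mount cannot throttle readahead on another. The sketch below only illustrates that direction; struct fs_ctx and its field are hypothetical stand-ins, not the kernel's struct btrfs_fs_info.]

/*
 * Sketch of the per-filesystem variant (hypothetical names).  Each
 * mounted FS gets its own counter, so the cap is enforced per mount
 * rather than system-wide.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MAX_READA_WORKS 6		/* stands in for BTRFS_MAX_MIRRORS * 2 */

struct fs_ctx {
	atomic_int reada_works_cnt;	/* counts this FS's in-flight works only */
};

/* Returns 1 if the caller may queue a work; the worker must later decrement. */
static int reada_try_queue(struct fs_ctx *fs)
{
	if (atomic_fetch_add(&fs->reada_works_cnt, 1) >= MAX_READA_WORKS) {
		atomic_fetch_sub(&fs->reada_works_cnt, 1);
		return 0;		/* this FS is at its cap; other mounts are unaffected */
	}
	return 1;
}

int main(void)
{
	struct fs_ctx a = { 0 }, b = { 0 };
	int i, a_ok = 0, b_ok = 0;

	/* Saturate FS "a"; FS "b" still accepts work because its counter is separate. */
	for (i = 0; i < 10; i++)
		a_ok += reada_try_queue(&a);
	b_ok = reada_try_queue(&b);

	printf("fs a accepted %d of 10, fs b accepted %d of 1\n", a_ok, b_ok);
	return 0;
}

Running this saturates one context while the other still accepts work, which is the behaviour wanted when more than one filesystem is mounted.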

Thread overview: 12+ messages
2016-01-12  7:46 [PATCH 1/2] btrfs: reada: limit max works count Zhao Lei
2016-01-12  7:46 ` [PATCH 2/2] btrfs: reada: simplify dev->reada_in_flight processing Zhao Lei
2016-01-20 15:16 ` [PATCH 1/2] btrfs: reada: limit max works count Chris Mason
2016-01-20 17:48   ` Chris Mason
2016-01-21  3:36     ` Zhao Lei [this message]
2016-01-21 10:06     ` Zhao Lei
2016-01-21 14:14       ` Chris Mason
2016-01-22 12:25         ` Zhao Lei
2016-01-22 14:19           ` Chris Mason
2016-01-26  9:08             ` Zhao Lei
2016-01-28  7:49             ` Zhao Lei
2016-01-28 13:30               ` Chris Mason
