From: Chris Mason <clm@fb.com>
To: Liu Bo <bo.li.liu@oracle.com>
Cc: Qu Wenruo <quwenruo@cn.fujitsu.com>,
Martin Steigerwald <martin@lichtvoll.de>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Still not production ready
Date: Wed, 16 Dec 2015 09:27:27 -0500 [thread overview]
Message-ID: <20151216142727.GD6322@ret.masoncoding.com> (raw)
In-Reply-To: <20151216023057.GC11024@localhost.localdomain>
On Tue, Dec 15, 2015 at 06:30:58PM -0800, Liu Bo wrote:
> On Wed, Dec 16, 2015 at 10:19:00AM +0800, Qu Wenruo wrote:
> > >max_stripe_size is fixed at 1GB and the chunk size is stripe_size * data_stripes,
> > >so may I know how your partition gets a 10GB chunk?
> >
> > Oh, it seems that I remembered the wrong size.
> > After checking the code, yes you're right.
> > A stripe won't be larger than 1G, so my assumption above is totally wrong.
> >
> > And the problem is not in the 10% limit.
> >
> > Please forget it.
>
> No problem, glad to see people talking about the space issue again.
You can still end up with larger block groups if you have a lot of
drives. We've had different problems with that in the past, but the
block group size is now capped at 10G.
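Roughly, the sizing works out like this (just a sketch of the
arithmetic with made-up names, not the actual allocator code):

#include <stdint.h>
#include <stdio.h>

#define SZ_1G (1024ULL * 1024 * 1024)

/* Sketch of the data chunk sizing discussed above: each per-device
 * stripe is capped at 1G, the whole chunk at 10G.  Names are made
 * up, this is not the kernel's allocator. */
static uint64_t data_chunk_size(uint64_t stripe_size, int num_data_stripes)
{
	uint64_t max_stripe_size = SZ_1G;
	uint64_t max_chunk_size = 10 * max_stripe_size;
	uint64_t chunk;

	if (stripe_size > max_stripe_size)
		stripe_size = max_stripe_size;

	chunk = stripe_size * num_data_stripes;
	if (chunk > max_chunk_size)
		chunk = max_chunk_size;

	return chunk;
}

int main(void)
{
	/* e.g. 12 data stripes from a lot of drives still hits the 10G cap */
	printf("%llu\n", (unsigned long long)data_chunk_size(SZ_1G, 12));
	return 0;
}

So with enough drives you land on the 10G cap rather than
stripe_size * data_stripes.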
At any rate, if things are still getting badly out of balance, we
need to tweak the allocator some more.
It's hard to reproduce because you need a burst of allocations for
whichever type (data or metadata) happens to be full. I'll give it
another shot.
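Something along these lines (just a sketch, not the exact test I'll
run) is one way to force a burst of data allocations and push the
allocator into grabbing new data chunks:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Crude burst of data allocations: fallocate a pile of 1G files back
 * to back so the allocator has to carve out new data chunks.  Adjust
 * the count and size for the filesystem being tested. */
int main(int argc, char **argv)
{
	const char *dir = argc > 1 ? argv[1] : ".";
	char path[4096];
	int i;

	for (i = 0; i < 32; i++) {
		int fd;

		snprintf(path, sizeof(path), "%s/burst-%d", dir, i);
		fd = open(path, O_CREAT | O_WRONLY, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (posix_fallocate(fd, 0, 1ULL << 30))
			fprintf(stderr, "fallocate failed on %s\n", path);
		close(fd);
	}
	return 0;
}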
-chris
Thread overview: 25+ messages
2015-12-13 22:35 Still not production ready Martin Steigerwald
2015-12-13 23:19 ` Marc MERLIN
2015-12-14 7:59 ` still kworker at 100% cpu in all of device size allocated with chunks situations with write load (was: Re: Still not production ready) Martin Steigerwald
2015-12-14 2:08 ` Still not production ready Qu Wenruo
2015-12-14 6:21 ` Duncan
2015-12-14 7:32 ` Qu Wenruo
2015-12-14 12:10 ` Duncan
2015-12-14 19:08 ` Chris Murphy
2015-12-14 20:33 ` Austin S. Hemmelgarn
2015-12-14 8:18 ` still kworker at 100% cpu in all of device size allocated with chunks situations with write load (was: Re: Still not production ready) Martin Steigerwald
2015-12-14 8:48 ` still kworker at 100% cpu in all of device size allocated with chunks situations with write load Qu Wenruo
2015-12-14 8:59 ` Martin Steigerwald
2015-12-14 9:10 ` safety of journal based fs (was: Re: still kworker at 100% cpu…) Martin Steigerwald
2015-12-22 2:34 ` Kai Krakow
2015-12-15 21:59 ` Still not production ready Chris Mason
2015-12-15 23:16 ` Martin Steigerwald
2015-12-16 1:20 ` Qu Wenruo
2015-12-16 1:53 ` Liu Bo
2015-12-16 2:19 ` Qu Wenruo
2015-12-16 2:30 ` Liu Bo
2015-12-16 14:27 ` Chris Mason [this message]
2016-01-01 10:44 ` still kworker at 100% cpu in all of device size allocated with chunks situations with write load Martin Steigerwald
2016-03-20 11:24 ` kworker threads may be working saner now instead of using 100% of a CPU core for minutes (Re: Still not production ready) Martin Steigerwald
2016-09-07 9:53 ` Christian Rohmann
2016-09-07 14:28 ` Martin Steigerwald