From: Josef Bacik <jbacik@fusionio.com>
To: Mitch Harder <mitch.harder@sabayonlinux.org>
Cc: Josef Bacik <JBacik@fusionio.com>,
"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH] Btrfs: do not allocate chunks as aggressively
Date: Fri, 17 Aug 2012 14:28:46 -0400
Message-ID: <20120817182846.GD2133@localhost.localdomain>
In-Reply-To: <CAKcLGm9rra1v8t5rRB2DAw0YM0QC+JHMTZK=uzMOXOv9hAZfMw@mail.gmail.com>

On Wed, Aug 15, 2012 at 11:29:11AM -0600, Mitch Harder wrote:
> On Tue, Aug 14, 2012 at 3:22 PM, Josef Bacik <jbacik@fusionio.com> wrote:
> > Swinging this pendulum back the other way: we've been allocating chunks
> > up to 2% of the disk no matter how much we actually have allocated. So
> > instead, fix this calculation to only allocate chunks once more than 80%
> > of the available space is allocated. Please test this, as it will likely
> > cause all sorts of ENOSPC problems to pop up suddenly. Thanks,
> >
> > Signed-off-by: Josef Bacik <jbacik@fusionio.com>
>
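In rough terms, the heuristic described in the patch blurb boils down to
a check like the following minimal C sketch (the function name, parameter
names, and arithmetic here are illustrative assumptions, not the code
from the patch itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: decide whether to allocate a new chunk.  The described
     * fix only allocates once more than 80% of the space available
     * for allocation has already been handed out to chunks. */
    static bool should_alloc_chunk(uint64_t total_bytes,
                                   uint64_t allocated_bytes)
    {
            /* 80% threshold; divide first to avoid u64 overflow. */
            uint64_t thresh = (total_bytes / 10) * 8;

            return allocated_bytes > thresh;
    }
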
> I've been testing this patch with my multiple-rsync test (on a 3.5.1
> kernel merged with for-linus).
>
> I tested without compression and with lzo compression, and I haven't
> run into any ENOSPC issues. I still have ENOSPC issues with zlib,
> with or without this patch.
>
> I made a series of runs with and without this patch (on an
> uncompressed, newly formatted partition), and some of the results were
> not what I anticipated.
>
> 1) I found that *MORE* metadata space was being allocated with this
> patch than when using an unpatched baseline kernel. The total
> allocated space was exactly the same in each run (I saw a slight
> variation in the amount of used Metadata).
>
> On the unpatched baseline kernel, at the end of the run, the 'btrfs fi
> df' command would show:
>
> # btrfs fi df /mnt/benchmark/
> Data: total=10.01GB, used=6.99GB
> System: total=4.00MB, used=4.00KB
> Metadata: total=776.00MB, used=481.38MB
>
> With this patch applied, the 'btrfs fi df' command would show:
>
> # btrfs fi df /mnt/benchmark/
> Data: total=10.01GB, used=6.99GB
> System: total=4.00MB, used=4.00KB
> Metadata: total=1.01GB, used=480.94MB
>
>
> 2) The multiple rsyncs would run significantly faster with the patched kernel.
>
> Unpatched baseline kernel: Time to run 7 rsyncs: 348.3 sec (+/- 9.7 sec)
> Patched kernel: Time to run 7 rsyncs: 316.6 sec (+/- 6.5 sec)
>
> Perhaps the extra allocated metadata space made things run better, or
> perhaps something else was going on.
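(For scale: the patched run ends with roughly 1.01 GB - 776 MB ≈ 258 MB,
about a third, more metadata space allocated, while metadata actually
used is essentially unchanged at ~481 MB; the patched runs also finish
about 9% faster, 316.6 s vs 348.3 s.)
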
Well, that's odd. I wonder if we're doing the limited dance more often. Once
I've finished my fsync work, I'll come back to this. I know for sure that in
my tests it's allocating chunks way too often, so I imagine your test is just
tickling a different aspect of the chunk allocator. Thanks,
Josef
Thread overview: 3+ messages
2012-08-14 20:22 [PATCH] Btrfs: do not allocate chunks as aggressively Josef Bacik
2012-08-15 17:29 ` Mitch Harder
2012-08-17 18:28 ` Josef Bacik [this message]