From: Josef Bacik <josef@redhat.com>
To: Josh Berry <des@condordes.net>
Cc: Josef Bacik <josef@redhat.com>, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] Btrfs: dynamically remove unused block groups
Date: Tue, 30 Nov 2010 14:01:01 -0500
Message-ID: <20101130190100.GA2577@localhost.localdomain>
In-Reply-To: <AANLkTikkU_2SbtVzRuUJbTGO3dQGQc9Jd8FgpGFPF7uE@mail.gmail.com>

On Tue, Nov 30, 2010 at 09:37:17AM -0800, Josh Berry wrote:
> On Tue, Nov 30, 2010 at 08:46, Josef Bacik <josef@redhat.com> wrote:
> > Btrfs only allocates chunks as we need them, however we do not delete chunks as
> > we stop using them.  This patch adds this capability.  Whenever we clear the
> > last bit of used space in a block group we try and mark it read only, and then
> > when the last pinned space is finally removed we queue up the deletion work.
> > I've tested this with xfstests and my enospc tests.  When filling up the disk
> > I see that we've allocated the entire disk of chunks, and then when I do rm *
> > there is a bunch of space freed up.  Thanks,
>
> Stupid user question:
>
> I have a btrfs filesystem on a 2.6.36 kernel that used to have ~800GB
> of data on it.  Then I deleted ~500GB of it (moved it elsewhere), but
> my space usage as reported by df and the btrfs tool didn't decrease
> appreciably.  Might this be why?
>
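
To make the mechanism in the quoted description a little more concrete, here is a
rough, stand-alone C sketch of the state machine this patch adds.  It is not the
kernel code; every name in it (toy_block_group, toy_free_space, toy_unpin_space,
toy_queue_deletion) is invented for illustration, and the real work happens on the
kernel's block group structures inside the extent allocator.

/*
 * Toy user-space model of the idea above -- NOT the actual patch.
 * All names here are made up; the real code operates on the kernel's
 * block group structures.
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_block_group {
    unsigned long long used;   /* bytes still referenced by extents */
    unsigned long long pinned; /* bytes freed but not reusable until commit */
    bool ro;                   /* read only: no new allocations allowed */
};

/* stand-in for queuing the real chunk-removal work item */
static void toy_queue_deletion(struct toy_block_group *bg)
{
    (void)bg;
    printf("block group empty and unpinned: queue chunk removal\n");
}

/* called when 'bytes' of used space are freed; they stay pinned until commit */
static void toy_free_space(struct toy_block_group *bg, unsigned long long bytes)
{
    bg->used -= bytes;
    bg->pinned += bytes;
    if (bg->used == 0)
        bg->ro = true;  /* last used byte gone: mark the group read only */
}

/* called when pinned space is released at transaction commit */
static void toy_unpin_space(struct toy_block_group *bg, unsigned long long bytes)
{
    bg->pinned -= bytes;
    if (bg->ro && bg->used == 0 && bg->pinned == 0)
        toy_queue_deletion(bg);  /* last pinned byte gone: remove the chunk */
}

int main(void)
{
    struct toy_block_group bg = { .used = 4096, .pinned = 0, .ro = false };

    toy_free_space(&bg, 4096);   /* e.g. rm of the last extent in this group */
    toy_unpin_space(&bg, 4096);  /* commit releases the pinned bytes */
    return 0;
}

Mapped back to the patch: used dropping to zero is "clearing the last bit of used
space" (the group gets marked read only so nothing new lands in it), and pinned
dropping to zero is "the last pinned space finally removed", which is the point
where the chunk-deletion work actually gets queued.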

So without this patch, with a full fs I do this

[root@test1244 ~]# ./btrfs-progs-unstable/btrfs fi df /mnt/btrfs-test/
Data: total=980.25MB, used=909.91MB
System, DUP: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=511.88MB, used=190.42MB
Metadata: total=8.00MB, used=0.00

If I removed everything from the fs, you'd still see Data total=980.25MB, but
used should be close to 0 (this is assuming no snapshots and such).  With this
patch if I rm -rf /mnt/btrfs/* I get this

[root@test1244 ~]# ./btrfs-progs-unstable/btrfs fi df /mnt/btrfs-test/
Data: total=204.75MB, used=192.00KB
System, DUP: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=307.12MB, used=24.00KB
Metadata: total=8.00MB, used=0.00

So that freed up ~700MB in data space and ~200MB in metadata space that can be
allocated to either data or metadata based on your usage patterns.  I hope that
helps explain it.  Thanks,

Josef

Thread overview: 7+ messages
2010-11-30 16:46 [PATCH] Btrfs: dynamically remove unused block groups Josef Bacik
2010-11-30 17:37 ` Josh Berry
2010-11-30 19:01   ` Josef Bacik [this message]
2010-11-30 19:31     ` Josh Berry
2010-11-30 19:35       ` Josef Bacik
2010-12-01  4:53         ` Anthony Roberts
2010-12-01  8:11           ` Josef Bacik
