linux-btrfs.vger.kernel.org archive mirror
From: Marc Haber <mh+linux-btrfs@zugschlus.de>
To: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: New file system with same issue (was: Again, no space left on device while rebalancing and recipe doesnt work)
Date: Sun, 13 Mar 2016 17:56:38 +0100	[thread overview]
Message-ID: <20160313165638.GS2334@torres.zugschlus.de> (raw)
In-Reply-To: <CALxwgb5oMZWT3v7vLvs_nqZ+eNzUns78yW51mj+N7zbsMf1_MQ@mail.gmail.com>

On Mon, Mar 14, 2016 at 12:17:24AM +1100, Andrew Vaughan wrote:
> On 13 March 2016 at 22:58, Marc Haber <mh+linux-btrfs@zugschlus.de> wrote:
> > Hi,
> >
> > On Sat, Mar 05, 2016 at 12:34:09PM -0700, Chris Murphy wrote:
> >> The alternative if this can't be fixed, is to recreate the filesystem
> >> because there's no practical way yet to migrate so many snapshots to a
> >> new file system.
> >
> > I recreated the file system on March 7, with 200 GiB in size, using
> > btrfs-tools 4.4. The snapshot-taking process has been running since
> > then, but I also regularly cleaned up. The number of snapshots on the
> > new filesystem has never exceeded 1000, with the current count being
> > at 148.
> >
> <snip>
> 
> 
> I'm not a dev, so I'll just throw out a random, possibly naive, idea.
> 
> How much i/o load is this filesystem under?
> What type of access pattern(s), how frequent and large are the changes?

Nearly none. It's a workstation, which I have avoided using in the last
few days because of the filesystem trouble and to keep local work from
influencing the filesystem's behavior. I even log out after working on
the box for a few minutes.

There is a Debian apt-cacher running on the box, writing its cache to
this btrfs, but /var is on its own subvolume that is snapshotted only
once a day. I'll move /var/cache to a subvolume of its own and put that
subvolume on a "no snapshots" schedule.
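Splitting a directory out into its own subvolume as described above could look roughly like the following. This is only a hedged sketch, not the actual procedure used here: the apt-cacher service name and the rename-based swap are assumptions, and with RUN unset it is a dry run that merely echoes the commands.

```shell
#!/bin/sh
# Sketch: move /var/cache into its own btrfs subvolume so it can be
# excluded from the snapshot schedule.  Dry run by default; set
# RUN= (empty) to execute for real.
split_cache() {
    dir=${1:-/var/cache}
    run=${RUN-echo}
    $run systemctl stop apt-cacher          # quiesce the writer first (assumed service name)
    $run btrfs subvolume create "$dir.new"  # new subvolume beside the old directory
    $run cp -a --reflink=always "$dir/." "$dir.new/"  # cheap CoW copy within btrfs
    $run mv "$dir" "$dir.old"               # swap the new subvolume into place
    $run mv "$dir.new" "$dir"
    $run systemctl start apt-cacher
    # once everything checks out: rm -rf "$dir.old"
}
```

Reviewing the echoed commands first, then rerunning with RUN= (empty), keeps a mistake from taking the cache offline for longer than the swap itself.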

The box itself is running a couple of KVM VMs, but the virtual disks
of the VMs are on dedicated LVs.

> Are you still making snapshots every 10m?

I am snapshotting the subvolume /home/mh, with the obvious contents,
every ten minutes, yes. Most of the other subvolumes are snapshotted
once daily, and some are not snapshotted at all.
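For reference, the ten-minute snapshot step itself can be as small as one read-only snapshot into a date-stamped directory. A hedged sketch, not the actual tooling: the YYYY/MM/DD/HH layout is modeled on the paths quoted later in this message, and with RUN unset it only echoes the commands.

```shell
#!/bin/sh
# Sketch: take one read-only snapshot of /home/mh into a date-stamped
# tree.  Dry run by default; set RUN= (empty) to execute for real.
snap_home() {
    root=${1:-/mnt/snapshots/fanbtr/user/subdaily}
    run=${RUN-echo}
    dest="$root/$(date +%Y/%m/%d/%H)"      # one directory per hour
    $run mkdir -p "$dest"
    # -r makes the snapshot read-only, which btrfs send also requires
    $run btrfs subvolume snapshot -r /home/mh "$dest/-home-mh"
}
# e.g. driven from cron:  */10 * * * * root /usr/local/sbin/snap-home
```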

> How often do you delete old snapshots?  Also every 10m, or do you
> delete them in batches every hour or so?

I delete them in batches about every other day.
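Batch cleanup every other day can then just walk the date tree and delete anything older than a cutoff. Another hedged sketch under the same assumptions (directory layout taken from the paths quoted below, GNU date for the cutoff, dry run unless RUN is set empty); note the -C, which commits after each subvolume rather than once at the end:

```shell
#!/bin/sh
# Sketch: prune date-named snapshot directories older than KEEP_DAYS
# days.  Dry run by default; set RUN= (empty) to execute for real.
prune_snapshots() {
    root=$1
    keep_days=${2:-2}
    run=${RUN-echo}
    cutoff=$(date -d "$keep_days days ago" +%Y%m%d)   # GNU date
    for day in "$root"/????/??/??; do
        [ -d "$day" ] || continue
        # zero-padded YYYY/MM/DD collapses to a comparable number
        day_num=$(printf '%s' "${day#"$root"/}" | tr -d /)
        if [ "$day_num" -lt "$cutoff" ]; then
            # subvolumes sit at HH/NNNN/<name> below each day directory
            find "$day" -mindepth 3 -maxdepth 3 -type d \
                -exec $run btrfs subvolume delete -C {} \;
        fi
    done
}
```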

> How long does "btrfs subvolume delete -c <one_snapshot>" take?
> What does "time btrfs subvolume delete -C <one_snapshot> ;

[4/504]mh@fan:~$ time sudo btrfs subvolume delete -c /mnt/snapshots/fanbtr/user/subdaily/2016/03/13/07/5001/-home-mh
Delete subvolume (commit): '/mnt/snapshots/fanbtr/user/subdaily/2016/03/13/07/5001/-home-mh'

real    0m0.100s
user    0m0.000s
sys     0m0.016s
[5/505]mh@fan:~$ time sudo btrfs subvolume delete -C /mnt/snapshots/fanbtr/user/subdaily/2016/03/13/07/4001/-home-mh
Delete subvolume (commit): '/mnt/snapshots/fanbtr/user/subdaily/2016/03/13/07/4001/-home-mh'

real    0m0.079s
user    0m0.012s
sys     0m0.000s
[6/506]mh@fan:~$

The difference between -c and -C only shows up when more than one
snapshot is deleted in a single invocation: -c waits for a single
transaction commit after all the deletions, while -C waits for a
commit after each subvolume.

>  time btrfs subvolume sync <mount_point>" print ?

[8/508]mh@fan:~$ time sudo btrfs subvolume sync /

real    0m0.030s
user    0m0.004s
sys     0m0.008s
[9/509]mh@fan:~$

> The reason for asking is that even on a lightly loaded filesystem I
> have seen btrfs subvolume delete take more than 30 seconds.  On a more
> heavily loaded filesystem I have seen 5+ minutes before btrfs subvolume
> delete had finished.

In my experience, deleting snapshots in huge batches slows things down
quite a bit, but this btrfs does not suffer from that disease.

> If you have a high enough i/o load, plus large enough changes per
> snapshot, it might be possible to get btrfs into a situation where it
> never actually finishes cleaning up deleted snapshots.  (I'm also not
> sure what happens if you shut down or unmount whilst btrfs is still
> cleaning up, but I expect the devs thought of that).

It is a COW filesystem, so I'd expect it to stay consistent no matter
what. But that's the theory.

Greetings
Marc

-- 
-----------------------------------------------------------------------------
Marc Haber         | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany    |  lose things."    Winona Ryder | Fon: *49 6224 1600402
Nordisch by Nature |  How to make an American Quilt | Fax: *49 6224 1600421


Thread overview: 81+ messages
2016-02-27 21:14 Again, no space left on device while rebalancing and recipe doesnt work Marc Haber
2016-02-27 23:15 ` Martin Steigerwald
2016-02-28  0:08   ` Marc Haber
2016-02-28  0:22     ` Hugo Mills
2016-02-28  8:40       ` Marc Haber
2016-02-29  1:56 ` Qu Wenruo
2016-02-29 15:33   ` Marc Haber
2016-03-01  0:45     ` Qu Wenruo
     [not found]       ` <20160301065448.GJ2334@torres.zugschlus.de>
2016-03-01  7:24         ` Qu Wenruo
2016-03-01  8:13           ` Qu Wenruo
     [not found]             ` <20160301161659.GR2334@torres.zugschlus.de>
2016-03-03  2:02               ` Qu Wenruo
2016-03-01 20:51           ` Duncan
2016-03-05 14:28             ` Marc Haber
2016-03-03  0:28 ` Dāvis Mosāns
2016-03-03  3:42   ` Qu Wenruo
2016-03-03  4:57   ` Duncan
2016-03-03 15:39     ` Dāvis Mosāns
2016-03-04 12:31       ` Duncan
2016-03-04 12:35         ` Hugo Mills
2016-03-27 12:10         ` Martin Steigerwald
2016-03-27 23:12           ` Duncan
2016-03-05 14:39   ` Marc Haber
2016-03-05 19:34     ` Chris Murphy
2016-03-05 20:09       ` Marc Haber
2016-03-06  6:43         ` Duncan
2016-03-06 20:27           ` Chris Murphy
2016-03-06 20:37             ` Chris Murphy
2016-03-07  8:47               ` Marc Haber
2016-03-07  8:42             ` Marc Haber
2016-03-07 18:39               ` Chris Murphy
2016-03-07 18:56                 ` Austin S. Hemmelgarn
2016-03-07 19:07                   ` Chris Murphy
2016-03-07 19:33                   ` Marc Haber
2016-03-12 21:36                 ` Marc Haber
2016-03-07 19:44               ` Chris Murphy
2016-03-07 20:43                 ` Duncan
2016-03-07 22:44                   ` Chris Murphy
2016-03-12 21:30             ` Marc Haber
2016-03-07  8:30           ` Marc Haber
2016-03-07 20:07             ` Duncan
2016-03-07  8:56         ` Marc Haber
2016-03-12 19:57       ` Marc Haber
2016-03-13 19:43         ` Chris Murphy
2016-03-13 20:50           ` Marc Haber
2016-03-13 21:31             ` Chris Murphy
2016-03-12 21:14       ` Marc Haber
2016-03-13 11:58       ` New file system with same issue (was: Again, no space left on device while rebalancing and recipe doesnt work) Marc Haber
2016-03-13 13:17         ` Andrew Vaughan
2016-03-13 16:56           ` Marc Haber [this message]
2016-03-13 17:12         ` Duncan
2016-03-13 21:05           ` Marc Haber
2016-03-14  1:05             ` Duncan
2016-03-14 11:49               ` Marc Haber
2016-03-13 19:14         ` Henk Slager
2016-03-13 19:42           ` Henk Slager
2016-03-13 20:56           ` Marc Haber
2016-03-14  0:00             ` Henk Slager
2016-03-15  7:20               ` Marc Haber
2016-03-14 12:07         ` Marc Haber
2016-03-14 12:48           ` New file system with same issue Holger Hoffstätte
2016-03-14 20:13             ` Marc Haber
2016-03-15 10:52               ` Holger Hoffstätte
2016-03-15 13:46                 ` Marc Haber
2016-03-15 13:54                   ` Austin S. Hemmelgarn
2016-03-15 14:09                     ` Marc Haber
2016-03-17  1:17               ` A good "Boot Maintenance" scheme (WAS: New file system with same issue) Robert White
2016-03-14 13:46           ` New file system with same issue (was: Again, no space left on device while rebalancing and recipe doesnt work) Henk Slager
2016-03-14 20:05             ` Marc Haber
2016-03-14 20:39               ` Henk Slager
2016-03-14 21:59                 ` Chris Murphy
2016-03-14 23:22                   ` Henk Slager
2016-03-15  7:16                     ` Marc Haber
2016-03-15 12:15                       ` Henk Slager
2016-03-15 13:24                         ` Marc Haber
2016-03-15  7:07                 ` Marc Haber
2016-03-27 12:15                   ` Martin Steigerwald
2016-03-15 13:29               ` Marc Haber
2016-03-15 13:42                 ` Marc Haber
2016-03-15 16:54                   ` Henk Slager
2016-03-27  8:41 ` Current state of old filesystem " Marc Haber
2016-04-01 13:59 ` Again, no space left on device while rebalancing and recipe doesnt work Marc Haber
