From: John Goerzen <jgoerzen@complete.org>
To: linux-btrfs@vger.kernel.org
Subject: umount waiting for 12 hours and still running
Date: Tue, 05 Nov 2013 07:42:02 -0600
Message-ID: <5278F5AA.60108@complete.org>

Hello,

More than 12 hours ago, I tried to umount a btrfs filesystem.  The
kernel threads btrfs-cleaner and btrfs-transacti (the truncated comm
name of btrfs-transaction) are still running, but I don't know what
they are doing.

I have noticed excessively long umount times before, and it is a
significant concern for me.

A bit of background:

The filesystem in question spans two 2TB USB hard drives.  It is 49%
full.  Data is RAID0, metadata is RAID1.  The files stored on it are for
BackupPC, meaning there are many, many directories and hardlinks.  I
would estimate 30 million inodes in use, many of them with dozens of
hardlinks.  These disks used to be formatted with ext4.  I used dump(8)
to back them up, created a fresh btrfs filesystem, and used restore(8)
to load the data onto it, roughly as sketched below.
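
For reference, the migration went approximately like this (device names
and paths here are illustrative, not the exact ones I used):

   # back up each ext4 filesystem with dump(8)
   dump -0af /nfs/backup/sdb1.dump /dev/sdb1

   # create the new two-device btrfs filesystem: RAID0 data, RAID1 metadata
   mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdc1
   mount /dev/sdb1 /mnt/backuppc

   # reload the data with restore(8), run from inside the new filesystem
   cd /mnt/backuppc && restore -rf /nfs/backup/sdb1.dump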

Now then.  btrfs seemed to be extremely slow creating hard links -- slow
to the tune of taking hours longer than ext4 to do the same task, and
often triggering kernel "task blocked for more than 120 seconds"
warnings.  I thought perhaps converting the metadata to RAID0 would
help, so I started a btrfs balance start -mconvert=raid0 on it.
According to btrfs fi df, it churned through the first 900MB out of
26GB of metadata in quick order, but then the amount of RAID0 metadata
bounced up and down between about 950MB and 1019MB -- always just shy of
1GB.  There was an active rsync job writing to the disk during this
time.  With no apparent progress even after hours, I tried to cancel
the balance, but btrfs balance cancel did not return even after waiting
hours.  Finally I rebooted and mounted the FS with the option to not
resume the balance (skip_balance); the cancel then completed in a few
minutes.  The full command sequence is sketched below.  dstat showed
all was quiet on the disk, so I thought I would unmount it, remount it
normally, and start the convert again.
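
For clarity, the command sequence was approximately this (mount point
illustrative):

   # convert metadata from RAID1 to RAID0
   btrfs balance start -mconvert=raid0 /mnt/backuppc

   # watch progress by allocation type
   btrfs fi df /mnt/backuppc

   # attempt to cancel -- this never returned
   btrfs balance cancel /mnt/backuppc

   # after the reboot: mount without resuming the interrupted balance
   mount -o skip_balance /dev/sdb1 /mnt/backuppc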

And it is from that unmount that it has been sitting ever since.
According to dstat, it reads about 360KB per second, every so often
writing out a burst of about 25MB per second.  And it has been doing
this for 12 hours.
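
For the record, those figures are eyeballed from the disk columns of
dstat, roughly:

   # total disk read/write, 5-second samples
   dstat -d 5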

It seems I have encountered numerous problems here:

   * I/O starvation on link(2) and perhaps also unlink(2)
   * btrfs balance convert making no progress after many hours
   * btrfs balance cancel not stopping anything
   * umount taking hours

The umount is still pending, so if there is any debugging I can do,
please let me know.
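
For example, assuming sysrq is enabled and /proc/<pid>/stack is
available on this kernel, I could dump stack traces of the stuck tasks
while the umount is still pending:

   # dump stack traces of all uninterruptible (D-state) tasks to dmesg
   echo w > /proc/sysrq-trigger
   dmesg | tail -n 200

   # or inspect a specific kernel thread directly
   # (one btrfs-cleaner per mounted btrfs filesystem)
   cat /proc/$(pgrep btrfs-cleaner)/stack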

Kernel 3.10 from Debian wheezy backports on i386.

Thanks,

John
