public inbox for linux-btrfs@vger.kernel.org
From: Troy Ablan <tablan@gmail.com>
To: "Yan, Zheng " <yanzheng@21cn.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: panic during rebalance, and now upon mount
Date: Sun, 31 Jan 2010 12:33:44 -0700
Message-ID: <4B65DB18.60309@gmail.com>
In-Reply-To: <3d0408631001310434g35b1fa4cp8a42068ef5fa5a34@mail.gmail.com>

Yan, Zheng wrote:
> Please try the patch attached below. It should fix the bug when
> mounting that fs. But I don't know why there are so many link count
> errors in that fs. How old is that fs? What was it used for?
>
> Thank you very much.
> Yan, Zheng
>
>   
Good so far.  Thanks!

The filesystem is less than 2 weeks old, created and managed exclusively
with the unstable tools (Btrfs v0.19-4-gab8fb4c-dirty).

I created the filesystem with -d raid1 -m raid1.

There are 14 dm-crypt mappings corresponding to 14 partitions on 14
drives.  There's one filesystem made up of these devices, with about 14
TB of space (a mixture of devices ranging from 500GB to 2TB).
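
For concreteness, the setup was created roughly like this (the device
and mapping names here are illustrative, not the actual ones):

    # one dm-crypt mapping per partition, 14 in total
    cryptsetup luksOpen /dev/sdb1 crypt1
    cryptsetup luksOpen /dev/sdc1 crypt2
    ...
    # one btrfs filesystem across all mappings, raid1 data and metadata
    mkfs.btrfs -d raid1 -m raid1 /dev/mapper/crypt1 /dev/mapper/crypt2 \
        [... remaining 12 mappings]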

The filesystem is used for incremental backup from remote computers
using rsync.

The filesystem tree is as follows:

/
/machine1 <- normal directory
/machine1/machine1 <- a subvolume
/machine1/machine1-20100120-1220 <- a snapshot of the subvolume above
...
/machine1/machine1-20100131-1220 <- more snapshots of the subvolume above
/machine2 <- normal directory
/machine2/machine2 <- a subvolume
/machine2/machine2-20100120-1020 <- a snapshot of the subvolume above
...
/machine2/machine2-20100131-1020 <- more snapshots of the subvolume above
...

The files are backed up with `rsync -aH --inplace` onto the subvolume
for each machine.
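
Each backup cycle looks roughly like the following (the mount point and
timestamp are illustrative, and I'm going from memory of the v0.19
btrfsctl usage, so the exact snapshot invocation may differ slightly):

    # pull the remote machine into its subvolume
    rsync -aH --inplace machine1:/ /mnt/backup/machine1/machine1/
    # then snapshot the subvolume under a timestamped name, giving
    # the sibling entries shown in the tree above
    btrfsctl -s /mnt/backup/machine1/machine1-20100131-1220 \
        /mnt/backup/machine1/machine1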

The only oddness I can think of is that during initial testing of this
filesystem, I physically yanked a drive from the machine while it was
writing.  btrfs seemed to keep trying to write to the inaccessible
device, and indeed btrfs-show showed the used space on the missing
drive increasing over time.  Also, I was unable to remove the drive from
the volume (the ioctl returned -1), so it stayed in that state until I
rebooted a couple of hours later.  I then did a btrfs-vol -r missing on
the drive and added it back in as a new device.  I ran btrfs-vol -b,
which succeeded once.  After adding more drives, I ran btrfs-vol -b
again, and that left me in the state where this thread began.
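
In terms of commands, the sequence after the reboot was roughly this
(the mount point and device names are again illustrative):

    btrfs-vol -r missing /mnt/backup              # drop the dead device
    btrfs-vol -a /dev/mapper/crypt1 /mnt/backup   # re-add it as new
    btrfs-vol -b /mnt/backup                      # first balance: succeeded
    # ... added more drives with btrfs-vol -a ...
    btrfs-vol -b /mnt/backup                      # second balance: panic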

As I understand it, btrfs-vol -b is currently one of the few ways to
re-duplicate unmirrored chunks after a drive failure (aside from
rewriting the data or from removing and re-adding devices).  Is my
understanding correct?

Thanks

-- 
Troy

Thread overview: 8+ messages
2010-01-30  6:05 panic during rebalance, and now upon mount Troy Ablan
2010-01-30 12:09 ` Yan, Zheng 
2010-01-30 17:31   ` Troy Ablan
2010-01-31  2:00     ` Yan, Zheng 
2010-01-31 10:09       ` Troy Ablan
2010-01-31 12:34         ` Yan, Zheng 
2010-01-31 19:33           ` Troy Ablan [this message]
2010-02-01  4:22             ` Yan, Zheng 
