From: Steven Post <redalert.commander@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Device delete returns "unable to go below four devices on raid10" on 5 drive setup
Date: Sat, 31 Aug 2013 12:12:55 +0200
Message-ID: <1377943975.5426.17.camel@pc-steven.LAN>

Hello list,

I have a 5-drive raid10 setup (the 6th SATA port malfunctions; all
drives are 3 TB in size).
I want to remove a single drive, yet the 'btrfs device delete' command
gives me the "unable to go below four devices on raid10" error.

This is the result of first deleting a device; after a check it didn't
seem to be removed, and issuing the command again produced the error.

Before:
# btrfs filesystem show /dev/sda3
Label: 'maindrivearray'  uuid: f58976ab-2ce1-4a1c-bc82-22df7d3393b4
	Total devices 5 FS bytes used 2.67TB
	devid    4 size 2.73TB used 1.09TB path /dev/sde3
	devid    3 size 2.73TB used 1.09TB path /dev/sdd3
	devid    2 size 2.73TB used 1.09TB path /dev/sdc3
	devid    6 size 2.73TB used 1.09TB path /dev/sdb3
	devid    5 size 2.73TB used 1.09TB path /dev/sda3

Btrfs Btrfs v0.19

After issuing the command
# btrfs device delete /dev/sde3 /mnt

I get this:
# btrfs filesystem show /dev/sda3
Label: 'maindrivearray'  uuid: f58976ab-2ce1-4a1c-bc82-22df7d3393b4
	Total devices 5 FS bytes used 2.67TB
	devid    4 size 2.73TB used 1.09TB path /dev/sde3
	devid    3 size 2.73TB used 1.24TB path /dev/sdd3
	devid    2 size 2.73TB used 1.24TB path /dev/sdc3
	devid    6 size 2.73TB used 1.24TB path /dev/sdb3
	devid    5 size 2.73TB used 1.24TB path /dev/sda3

Btrfs Btrfs v0.19

Issuing the delete command again brings up the error, even after a
reboot. The first remove did take a long time to complete, and
according to syslog and the 'filesystem show' output, a lot of data
was moved to the other drives (as expected).
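
For completeness, this is roughly how I checked afterwards (the exact
output differs between btrfs-progs versions, so take these as an
illustration rather than a transcript):

# btrfs filesystem df /mnt
# grep -i btrfs /var/log/syslog | tail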

The system is running Debian Wheezy (kernel 3.2.0-4-amd64 #1 SMP Debian
3.2.46-1 x86_64).

Is this a known issue (and possibly resolved in a later version), or
should I open a bug report about it? Could it be that the device
removal actually completed, but the device still shows as part of the
array for some reason?
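
If it helps, I believe the kernel's relocation progress can be checked
with something like the following (assuming the kernel logs its usual
"relocating block group" lines during a device delete):

# dmesg | grep -i 'relocating block group'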

The reason for the remove is that I want to (gradually) replace the
3 TB drives with 1 TB ones, and somewhere in the middle move some of
the array's data to another machine, which currently has the 1 TB
drives that I intend to replace with the 3 TB ones.
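
Per drive, the plan would be roughly the following (device names are
placeholders; /dev/sdf3 stands for whichever 1 TB drive ends up on the
freed port, and the balance syntax is the old v0.19 form):

# btrfs device delete /dev/sde3 /mnt
(physically swap the 3 TB drive for a 1 TB one)
# btrfs device add /dev/sdf3 /mnt
# btrfs filesystem balance /mnt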

Best regards,
Steven
