* How to remove a device on a RAID-1 before replacing it?
From: Andrew Lutomirski @ 2011-03-29 20:09 UTC
To: linux-btrfs
I have a disk with a SMART failure. It still works but I assume it'll
fail sooner or later.
I want to remove it from my btrfs volume, replace it, and add the new
one. But the obvious command doesn't work:
# btrfs device delete /dev/dm-5 /mnt/foo
ERROR: error removing the device '/dev/dm-5'
dmesg says:
btrfs: unable to go below two devices on raid1
With mdadm, I would fail the device, remove it, run degraded until I
get a new device, and hot-add that device.
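For reference, the mdadm sequence I have in mind is roughly this (array and device names are just placeholders):
# mdadm /dev/md0 --fail /dev/sdb1      # mark the failing member as faulty
# mdadm /dev/md0 --remove /dev/sdb1    # detach it; the array keeps running degraded
# mdadm /dev/md0 --add /dev/sdc1       # hot-add the replacement and let it resync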
With btrfs, I'd like some confirmation from the fs that data is
balanced appropriately so I won't get data loss if I just yank the
drive. And I don't even know how to tell btrfs to release the drive
so I can safely remove it.
(Mounting with -o degraded doesn't help. I could umount, remove the
disk, then remount, but that feels like a hack.)
This is 2.6.38.1 running Fedora 14's version of btrfs-progs, but
btrfs-progs-unstable git does the same thing, as does btrfs-vol -r.
--Andy
* Re: How to remove a device on a RAID-1 before replacing it?
From: cwillu @ 2011-03-29 20:21 UTC
To: Andrew Lutomirski; +Cc: linux-btrfs
On Tue, Mar 29, 2011 at 2:09 PM, Andrew Lutomirski <luto@mit.edu> wrote:
> I have a disk with a SMART failure. It still works but I assume it'll
> fail sooner or later.
>
> I want to remove it from my btrfs volume, replace it, and add the new
> one. But the obvious command doesn't work:
>
> # btrfs device delete /dev/dm-5 /mnt/foo
> ERROR: error removing the device '/dev/dm-5'
>
> dmesg says:
> btrfs: unable to go below two devices on raid1
>
> With mdadm, I would fail the device, remove it, run degraded until I
> get a new device, and hot-add that device.
>
> With btrfs, I'd like some confirmation from the fs that data is
> balanced appropriately so I won't get data loss if I just yank the
> drive. And I don't even know how to tell btrfs to release the drive
> so I can safely remove it.
>
> (Mounting with -o degraded doesn't help. I could umount, remove the
> disk, then remount, but that feels like a hack.)
There's no "nice" way to remove a failing disk in btrfs right now
("btrfs dev delete" is more of a online management thing to politely
remove a perfectly functional disk you'd like to use for something
else.) As I understand things, the only way to do it right now is the
umount, remove disk, remount w/ degraded, and then btrfs add the new
device.
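In rough commands, with placeholder device names (a sketch, not something I've just tested):
# umount /mnt/foo
  (physically pull or otherwise detach the failing disk here)
# mount -o degraded /dev/sdX1 /mnt/foo
# btrfs device add /dev/sdY1 /mnt/foo
# btrfs filesystem balance /mnt/foo    # rewrite chunks so both raid1 copies exist again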
* Re: How to remove a device on a RAID-1 before replacing it?
From: Helmut Hullen @ 2011-03-29 20:45 UTC
To: linux-btrfs
Hello, cwillu,
you wrote on 29.03.11:
>> I have a disk with a SMART failure. It still works but I assume
>> it'll fail sooner or later.
>>
>> I want to remove it from my btrfs volume, replace it, and add the
>> new one. But the obvious command doesn't work:
[...]
> There's no "nice" way to remove a failing disk in btrfs right now
> ("btrfs dev delete" is more of a online management thing to politely
> remove a perfectly functional disk you'd like to use for something
> else.)
A nice hope, but even "btrfs device delete /dev/sdxn /mnt/btr" doesn't
work well.
I've tried it with kernel 2.6.38.1 ...
# mkfs.btrfs -L SCSI -m raid0 /dev/sda1
# mount LABEL=SCSI /mnt/btr
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 1 FS bytes used 28.00KB
devid 1 size 4.04GB used 20.00MB path /dev/sda1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data: total=8.00MB, used=0.00
System: total=4.00MB, used=8.00KB
Metadata: total=8.00MB, used=20.00KB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 4232556 28 4220224 1% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# btrfs device add /dev/sdc1 /mnt/btr
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 2 FS bytes used 28.00KB
devid 1 size 4.04GB used 433.31MB path /dev/sda1
devid 2 size 8.54GB used 0.00 path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data: total=421.31MB, used=0.00
System: total=4.00MB, used=4.00KB
Metadata: total=8.00MB, used=24.00KB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 13189632 28 13176256 1% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# 2 GByte copied
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 2 FS bytes used 2.10GB
devid 1 size 4.04GB used 433.31MB path /dev/sda1
devid 2 size 8.54GB used 2.25GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data: total=2.41GB, used=2.10GB
System: total=4.00MB, used=4.00KB
Metadata: total=264.00MB, used=2.79MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 13189632 2202740 10714232 18% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# 7 GByte copied
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 2 FS bytes used 9.03GB
devid 1 size 4.04GB used 1.42GB path /dev/sda1
devid 2 size 8.54GB used 8.25GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data: total=9.41GB, used=9.02GB
System: total=4.00MB, used=4.00KB
Metadata: total=264.00MB, used=11.80MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 13189632 9467164 3459032 74% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# btrfs device add /dev/sdb1 /mnt/btr
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 3 FS bytes used 9.12GB
devid 3 size 136.73GB used 0.00 path /dev/sdb1
devid 1 size 4.04GB used 1.42GB path /dev/sda1
devid 2 size 8.54GB used 8.25GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data: total=9.41GB, used=9.10GB
System: total=4.00MB, used=4.00KB
Metadata: total=264.00MB, used=11.91MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 156562592 9559000 146739216 7% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
/dev/sdb
1 2048 286747966 136.7 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# btrfs filesystem balance /mnt/btr
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 3 FS bytes used 9.11GB
devid 3 size 136.73GB used 10.61GB path /dev/sdb1
devid 1 size 4.04GB used 3.70GB path /dev/sda1
devid 2 size 8.54GB used 8.08GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data, RAID0: total=21.89GB, used=9.10GB
System: total=4.00MB, used=4.00KB
Metadata, RAID0: total=511.88MB, used=10.15MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 156562592 9557196 14370128 40% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
/dev/sdb
1 2048 286747966 136.7 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# 8 GByte copied
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 3 FS bytes used 17.02GB
devid 3 size 136.73GB used 10.61GB path /dev/sdb1
devid 1 size 4.04GB used 3.70GB path /dev/sda1
devid 2 size 8.54GB used 8.08GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data, RAID0: total=21.89GB, used=17.00GB
System: total=4.00MB, used=4.00KB
Metadata, RAID0: total=511.88MB, used=18.66MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 156562592 17850436 6085608 75% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
/dev/sdb
1 2048 286747966 136.7 GiB 0700 Linux/Windows data
#------------------------------------------------------------
# btrfs filesystem balance /mnt/btr
# btrfs filesystem show
Label: 'SCSI' uuid: 4d834705-5a65-4d1f-a8a0-a5ea9348db50
Total devices 3 FS bytes used 17.08GB
devid 3 size 136.73GB used 9.08GB path /dev/sdb1
devid 1 size 4.04GB used 2.09GB path /dev/sda1
devid 2 size 8.54GB used 8.08GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs filesystem df /mnt/btr
Data, RAID0: total=19.00GB, used=17.06GB
System: total=4.00MB, used=4.00KB
Metadata, RAID0: total=255.94MB, used=18.51MB
# df -t btrfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda1 btrfs 156562592 17911352 6118356 75% /mnt/btr
# gdisk -l
/dev/sda
1 2048 8467166 4.0 GiB 0700 Linux/Windows data
/dev/sdc
1 2048 17916206 8.5 GiB 0700 Linux/Windows data
/dev/sdb
1 2048 286747966 136.7 GiB 0700 Linux/Windows data
#------------------------------------------------------------
And then comes the problem I reported last year: "no space left on
device ...". In this particular case there is not even enough space
left to delete one partition.
Trying "btrfs device delete ..." without a previous "balance" leads to:
# btrfs device delete /dev/sdc1 /mnt/btr
# ends with "error removing the device '/dev/sdc1'"
Best regards!
Helmut
* Re: How to remove a device on a RAID-1 before replacing it?
From: Andrew Lutomirski @ 2011-03-29 21:01 UTC
To: cwillu; +Cc: linux-btrfs
On Tue, Mar 29, 2011 at 4:21 PM, cwillu <cwillu@cwillu.com> wrote:
> On Tue, Mar 29, 2011 at 2:09 PM, Andrew Lutomirski <luto@mit.edu> wrote:
>> I have a disk with a SMART failure. It still works but I assume it'll
>> fail sooner or later.
>>
>> I want to remove it from my btrfs volume, replace it, and add the new
>> one. But the obvious command doesn't work:
>>
>> # btrfs device delete /dev/dm-5 /mnt/foo
>> ERROR: error removing the device '/dev/dm-5'
>>
>> dmesg says:
>> btrfs: unable to go below two devices on raid1
>>
>> With mdadm, I would fail the device, remove it, run degraded until I
>> get a new device, and hot-add that device.
>>
>> With btrfs, I'd like some confirmation from the fs that data is
>> balanced appropriately so I won't get data loss if I just yank the
>> drive. And I don't even know how to tell btrfs to release the drive
>> so I can safely remove it.
>>
>> (Mounting with -o degraded doesn't help. I could umount, remove the
>> disk, then remount, but that feels like a hack.)
>
> There's no "nice" way to remove a failing disk in btrfs right now
> ("btrfs dev delete" is more of an online management tool for politely
> removing a perfectly functional disk you'd like to use for something
> else). As I understand things, the only way to do it right now is to
> umount, remove the disk, remount with -o degraded, and then btrfs add
> the new device.
>
Well, the disk *is* perfectly functional. It just won't be for long.
I guess what I'm saying is that btrfs dev delete isn't really working
as I'd expect -- I want to be able to convert to non-RAID and back, or
to degraded and back, or something else equivalent.
--Andy
* Re: How to remove a device on a RAID-1 before replacing it?
From: Hugo Mills @ 2011-03-29 21:15 UTC
To: Andrew Lutomirski; +Cc: cwillu, linux-btrfs
On Tue, Mar 29, 2011 at 05:01:39PM -0400, Andrew Lutomirski wrote:
> On Tue, Mar 29, 2011 at 4:21 PM, cwillu <cwillu@cwillu.com> wrote:
> > On Tue, Mar 29, 2011 at 2:09 PM, Andrew Lutomirski <luto@mit.edu> wrote:
> >> I have a disk with a SMART failure. It still works but I assume it'll
> >> fail sooner or later.
> >>
> >> I want to remove it from my btrfs volume, replace it, and add the new
> >> one. But the obvious command doesn't work:
> >>
> >> # btrfs device delete /dev/dm-5 /mnt/foo
> >> ERROR: error removing the device '/dev/dm-5'
> >>
> >> dmesg says:
> >> btrfs: unable to go below two devices on raid1
> >>
> >> With mdadm, I would fail the device, remove it, run degraded until I
> >> get a new device, and hot-add that device.
> >>
> >> With btrfs, I'd like some confirmation from the fs that data is
> >> balanced appropriately so I won't get data loss if I just yank the
> >> drive. And I don't even know how to tell btrfs to release the drive
> >> so I can safely remove it.
> >>
> >> (Mounting with -o degraded doesn't help. I could umount, remove the
> >> disk, then remount, but that feels like a hack.)
> >
> > There's no "nice" way to remove a failing disk in btrfs right now
> > ("btrfs dev delete" is more of a online management thing to politely
> > remove a perfectly functional disk you'd like to use for something
> > else.) As I understand things, the only way to do it right now is the
> > umount, remove disk, remount w/ degraded, and then btrfs add the new
> > device.
> >
>
> Well, the disk *is* perfectly functional. It just won't be for long.
>
> I guess what I'm saying is that btrfs dev delete isn't really working
> as I'd expect -- I want to be able to convert to non-RAID and back, or
> to degraded and back, or something else equivalent.
RAID conversion isn't quite ready yet, sadly. As I understand it,
you've got two options:
 - Yoink the drive (thus making the fs run in degraded mode), add the
   new one, and balance to spread the duplicate data onto the new
   volume.
 - Add the new drive to the FS first, then use btrfs dev del to remove
   the original device. This should end up writing all the replicated
   data to the new drive as it "removes" the data from the old one.
Of the two options, the latter is (for me) the favourite, as you
don't end up with a filesystem that's running on just a single copy of
the data.
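Roughly, for the second option (where /dev/sdnew1 is just a placeholder for your replacement disk):
# btrfs device add /dev/sdnew1 /mnt/foo     # bring the replacement into the filesystem
# btrfs device delete /dev/dm-5 /mnt/foo    # migrate the data off the old disk and drop it
# btrfs filesystem show                     # check that the old device is gone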
Hugo.
--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- Prof Brain had been in search of The Truth for 25 years, with ---
the intention of putting it under house arrest.