* Can this array be recovered?
From: Jaap Winius @ 2012-12-25 4:14 UTC
To: linux-raid
Hi folks,
While attempting to grow a RAID1 array after replacing the disks, it
looks like I've messed things up a bit. I started with this:
~# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat Jun 19 23:34:39 2010
Raid Level : raid1
Array Size : 767130560 (731.59 GiB 785.54 GB)
Used Dev Size : 767130560 (731.59 GiB 785.54 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Mon Dec 24 21:02:13 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 9be57e97:2c46675a:b5a3dfee:de98bb27
Events : 0.2246
Number Major Minor RaidDevice State
0 8 66 0 active sync /dev/sde2
1 8 82 1 active sync /dev/sdf2
Growing the array at this point wasn't working, because for some reason
the size of one of the two partitions (/dev/sdf2) used for the array was
still being reported with its previous smaller size (767130560) in
/sys/devices/virtual/block/md1/md/dev-sde2/size.
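(For reference, the grow attempt itself was something along these lines:)
~# mdadm --grow /dev/md1 --size=max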
So, I stopped the array to experiment with the "--update=devicesize"
option, but then stupidly issued these commands:
~# mdadm --assemble /dev/md1 --update=devicesize /dev/sdf2
mdadm: /dev/md1 assembled from 1 drive - need all 2 to start it (use --run to insist).
~# mdadm --assemble /dev/md1 --update=devicesize /dev/sdf2 /dev/sde2
mdadm: cannot open device /dev/sdf2: Device or resource busy
mdadm: /dev/sdf2 has no superblock - assembly aborted
After this, the array refused to respond to any commands until I stopped
it once more. However, it then refused to assemble again. I had read
somewhere that I might be able to recover it with a create command:
~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ef]2
mdadm: /dev/sde2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Jun 19 23:34:39 2010
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdf2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Jun 19 23:34:39 2010
mdadm: largest drive (/dev/sde2) exceeds size (767129464K) by more than 1%
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Perhaps this was an even more destructive thing to do. Anyway, the array
now looks like this:
~# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Dec 25 02:07:11 2012
Raid Level : raid1
Array Size : 767129464 (731.59 GiB 785.54 GB)
Used Dev Size : 767129464 (731.59 GiB 785.54 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Dec 25 02:07:11 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : bitis:1 (local to host bitis)
UUID : a8eca0e2:dd9941a3:03d8b0f0:d11c9994
Events : 0
Number Major Minor RaidDevice State
0 8 66 0 active sync /dev/sde2
1 8 82 1 active sync /dev/sdf2
It now has a different UUID (that was expected), but it also seems to
contain no data. I was hoping to find an LVM physical volume, but pvscan
is not detecting anything on the array. Oops. Luckily the data wasn't too
important, but the loss is nevertheless irritating.
Might there still be a way to recover my lost LVM data, or is this
situation hopeless?
Thanks,
Jaap
* Re: Can this array be recovered?
From: Phil Turmel @ 2012-12-25 6:05 UTC
To: Jaap Winius; +Cc: linux-raid
On 12/24/2012 11:14 PM, Jaap Winius wrote:
> Hi folks,
>
> While attempting to grow a RAID1 array after replacing the disks, it
> looks like I've messed things up a bit. I started with this:
>
> ~# mdadm --detail /dev/md1
> /dev/md1:
> Version : 0.90
> Creation Time : Sat Jun 19 23:34:39 2010
> Raid Level : raid1
> Array Size : 767130560 (731.59 GiB 785.54 GB)
> Used Dev Size : 767130560 (731.59 GiB 785.54 GB)
> Raid Devices : 2
> Total Devices : 2
> Preferred Minor : 1
> Persistence : Superblock is persistent
>
> Update Time : Mon Dec 24 21:02:13 2012
> State : clean
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> UUID : 9be57e97:2c46675a:b5a3dfee:de98bb27
> Events : 0.2246
>
> Number Major Minor RaidDevice State
> 0 8 66 0 active sync /dev/sde2
> 1 8 82 1 active sync /dev/sdf2
>
> Growing the array at this point wasn't working, because for some reason
> the size of one of the two partitions (/dev/sdf2) used for the array was
> still being reported with its previous smaller size (767130560) in
> /sys/devices/virtual/block/md1/md/dev-sde2/size.
>
> So, I stopped the array to experiment with the "--update=devicesize"
> option, but then stupidly issued these commands:
>
> ~# mdadm --assemble /dev/md1 --update=devicesize /dev/sdf2
> mdadm: /dev/md1 assembled from 1 drive - need all 2 to start it (use --run to insist).
> ~# mdadm --assemble /dev/md1 --update=devicesize /dev/sdf2 /dev/sde2
> mdadm: cannot open device /dev/sdf2: Device or resource busy
> mdadm: /dev/sdf2 has no superblock - assembly aborted
The array was partially assembled, but not started. So /dev/md1 and
/dev/sdf2 were busy. You should have stopped /dev/md1 before the
re-assembly attempt. Too late, now.
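Something like this would have been the safe sequence (just a sketch,
using your device names):
# mdadm --stop /dev/md1
# mdadm --assemble /dev/md1 --update=devicesize /dev/sde2 /dev/sdf2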
> After this, the array refused to respond to any commands until I stopped
> it once more. However, it then refused to assemble again. I had read
> somewhere that I might be able to recover it with a create command:
>
> ~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ef]2
> mdadm: /dev/sde2 appears to be part of a raid array:
> level=raid1 devices=2 ctime=Sat Jun 19 23:34:39 2010
> mdadm: Note: this array has metadata at the start and
> may not be suitable as a boot device. If you plan to
> store '/boot' on this device please ensure that
> your boot-loader understands md/v1.x metadata, or use
> --metadata=0.90
> mdadm: /dev/sdf2 appears to be part of a raid array:
> level=raid1 devices=2 ctime=Sat Jun 19 23:34:39 2010
> mdadm: largest drive (/dev/sde2) exceeds size (767129464K) by more than 1%
> Continue creating array? y
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md1 started.
It warned you :-). You said "y" and destroyed your LVM metadata.
Modern mdadm defaults to metadata v1.2, placed near the beginning of the
device. The original v0.90 metadata is placed at the end of the device.
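(--examine on a member device will show which metadata version it carries
and, for 1.x, where the superblock sits, e.g.:)
# mdadm --examine /dev/sde2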
> Perhaps this was an even more destructive thing to do. Anyway, the array
> now looks like this:
>
> ~# mdadm --detail /dev/md1
> /dev/md1:
> Version : 1.2
> Creation Time : Tue Dec 25 02:07:11 2012
> Raid Level : raid1
> Array Size : 767129464 (731.59 GiB 785.54 GB)
> Used Dev Size : 767129464 (731.59 GiB 785.54 GB)
> Raid Devices : 2
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Update Time : Tue Dec 25 02:07:11 2012
> State : clean
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Name : bitis:1 (local to host bitis)
> UUID : a8eca0e2:dd9941a3:03d8b0f0:d11c9994
> Events : 0
>
> Number Major Minor RaidDevice State
> 0 8 66 0 active sync /dev/sde2
> 1 8 82 1 active sync /dev/sdf2
>
> It now has a different UUID (that was expected), but it also seems to
> contain no data. I was hoping to find an LVM physical volume, but pvscan
> is not detecting anything on the array. Oops. Luckily the data wasn't too
> important, but the loss is nevertheless irritating.
>
> Might there still be a way to recover my lost LVM data, or is this
> situation hopeless?
Stop the array. Create it again, but specify v0.90 or v1.0 metadata.
(v1.0 also puts the superblock at the end.)
Since v1.2 metadata skips the first 4k, the critical part of the LVM
metadata is probably intact, and it'll "just work".
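In other words, something along these lines (a sketch; double-check the
device names before running it):
# mdadm --stop /dev/md1
# mdadm --create /dev/md1 --metadata=1.0 --level=1 --raid-devices=2 /dev/sd[ef]2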
After you recover your data, you should consider rebuilding your array
with default (v1.2) metadata for everything other than your boot filesystem.
HTH, and Merry Christmas.
Phil
* Re: Can this array be recovered?
From: Jaap Winius @ 2012-12-25 15:24 UTC
To: linux-raid
On Tue, 25 Dec 2012 01:05:35 -0500, Phil Turmel wrote:
> Stop the array. Create it again, but specify v0.90 or v1.0 metadata.
> (v1.0 also puts the superblock at the end.)
Okay, I used this command:
# mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sd[ef]2
> Since v1.2 metadata skips the first 4k, the critical part of the LVM
> metadata is probably intact, and it'll "just work".
I like that you're so optimistic, but unfortunately pvscan still doesn't
see anything on /dev/md1 and now also mentions "Incorrect metadata area
header checksum" when run.
> After you recover your data, you should consider rebuilding your array
> with default (v1.2) metadata for everything other than your boot
> filesystem.
If I manage to get my array back, I will definitely give that a try.
Christmas cheers,
Jaap
* Re: Can this array be recovered?
From: Jaap Winius @ 2012-12-25 15:56 UTC
To: linux-raid
On Tue, 25 Dec 2012 15:24:23 +0000, Jaap Winius wrote:
> ... pvscan still doesn't see anything on /dev/md1
> and now also mentions "Incorrect metadata area
> header checksum" when run.
The output of this command looks promising:
# pvscan -u
Incorrect metadata area header checksum
PV /dev/md2 with UUID 0iEU34-XNSd-vNpu-G3uW-2F1v-wEy9-Bw199N VG volgrp3 lvm2 [931.51 GiB / 11.51 GiB free]
PV /dev/md0 with UUID Lb3KzG-9tZM-5MbQ-1DUa-5ErW-DYf1-NqvMJW VG volgrp1 lvm2 [697.70 GiB / 5.44 GiB free]
PV /dev/md1 with UUID uWDdXr-8fRw-aJDG-SzdQ-yeIl-6dsv-28vJYs lvm2 [697.70 GiB]
Total: 3 [2.27 TiB] / in use: 2 [1.59 TiB] / in no VG: 1 [697.70 GiB]
That last UUID is still associated with a bunch of volgrp2 objects in my
current LVM configuration (the old LVM volume on /dev/md1), even though
pvscan currently doesn't recognize it as such.
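(The old PV UUID can be matched against the saved configs under
/etc/lvm/archive/ with something like this:)
# grep -l uWDdXr-8fRw-aJDG-SzdQ-yeIl-6dsv-28vJYs /etc/lvm/archive/*.vg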
Christmas cheers,
Jaap
* Re: Can this array be recovered?
From: Jaap Winius @ 2012-12-25 17:06 UTC
To: linux-raid
On Tue, 25 Dec 2012 15:56:38 +0000, Jaap Winius wrote:
> # pvscan -u
> Incorrect metadata area header checksum
> PV /dev/md2 with UUID 0iEU34-XNSd-vNpu-G3uW-2F1v-wEy9-Bw199N VG volgrp3 lvm2 [931.51 GiB / 11.51 GiB free]
> PV /dev/md0 with UUID Lb3KzG-9tZM-5MbQ-1DUa-5ErW-DYf1-NqvMJW VG volgrp1 lvm2 [697.70 GiB / 5.44 GiB free]
> PV /dev/md1 with UUID uWDdXr-8fRw-aJDG-SzdQ-yeIl-6dsv-28vJYs lvm2 [697.70 GiB]
> Total: 3 [2.27 TiB] / in use: 2 [1.59 TiB] / in no VG: 1 [697.70 GiB]
>
> That last UUID is still associated with a bunch of volgrp2 objects in my
> current LVM configuration (the old LVM volume on /dev/md1), even though
> pvscan currently doesn't recognize it as such.
All that was needed at this point was some LVM recovery. I did it like
this:
# pvcreate --uuid uWDdXr-8fRw-aJDG-SzdQ-yeIl-6dsv-28vJYs --restorefile /etc/lvm/archive/volgrp2_00007.vg /dev/md1
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Physical volume "/dev/md1" successfully created
This restored my original LVM physical volume. However, the original
volume group was missing. To fix that I created one with the same name on
that same physical device:
# vgcreate -v volgrp2 /dev/md1
Wiping cache of LVM-capable devices
Wiping cache of LVM-capable devices
Adding physical volume '/dev/md1' to volume group 'volgrp2'
Archiving volume group "volgrp2" metadata (seqno 0).
Creating volume group backup "/etc/lvm/backup/volgrp2" (seqno 1).
Volume group "volgrp2" successfully created
Okay, but at this point my logical volumes still had to be restored:
# vgcfgrestore -f /u21/backup/bitis/etc/2012-12-24@01:02:12/lvm/backup/volgrp2 volgrp2
Restored volume group volgrp2
This restored the volume group descriptor area from an older automatic
backup file for this volume group. It worked: my old vicepb volume was
back! Still, I couldn't mount it until I did this:
~# vgchange -ay
3 logical volume(s) in volume group "volgrp3" now active
1 logical volume(s) in volume group "volgrp2" now active
6 logical volume(s) in volume group "volgrp1" now active
Finally, this allowed me to mount my volume and make all my data
available once again.
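(Roughly like this; the mount point here is just an example, the real one
comes from fstab:)
~# mount /dev/volgrp2/vicepb /mnt/vicepb   # example mount point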
Thanks Phil!
Christmas cheers,
Jaap