* [linux-lvm] /dev/vg1/dir-1: read failed after 0 of 4096 at ...
From: Tero @ 2012-05-30 10:10 UTC (permalink / raw)
To: ubuntu-users, linux-lvm
Hi,
I really messed up my LVM setup. I am running Ubuntu 10.04 server with all
updates applied. Here is the whole story:
1. I have four (4) identical 320 GB hard disks: "/dev/sda", "/dev/sdb",
"/dev/sdc" and "/dev/sdd", and one (1) 1 TB disk, "/dev/sde".
2. Each 320 GB disk is partitioned into 1 GB + 319 GB ("/dev/sdX1" and
"/dev/sdX2"), and the 1 TB disk has a single 1 TB partition.
3. There is RAID-1 (/dev/md0) on the "/dev/sdX1" partitions and RAID-5
(/dev/md1) on the "/dev/sdX2" partitions.
4. The LVM volume group "vg1" was created at first only on /dev/md1.
5. On vg1 there were only the logical volumes "root", "dir-1" and "dir-2".
6. Later I attached hard disk /dev/sde and added it to volume group "vg1".
7. I created logical volume "drbd" with a command something like "lvcreate
-L 800G -n drbd vg1 /dev/sde1". (A rough sketch of the commands behind this
whole layout appears after the listings below.)
8. I was setting up a DRBD device on logical volume "drbd". I made some
changes to the "filter" parameter in "/etc/lvm/lvm.conf".
9. I noticed a "PV unknown device" entry and I accidentally removed it with
a command (I am not 100% sure of this) like "vgreduce --removemissing vg1".
If I remember correctly I used the "-f" switch too. :-(
10. Then I found that the system was acting strangely. If I remember
correctly it printed something like "Couldn't find device with uuid...".
Then I realized how stupid I had been!
11. Then I followed the instructions at
"http://support.citrix.com/article/CTX116095":
11.1. I booted from the installation disk into the installation environment.
11.2. I ran: "vgdisplay --partial --verbose"
11.3. Then I ran: "pvcreate --restorefile /etc/lvm/backup/vg1 --uuid
UUID /dev/sde1 -ff"
11.4. Then again: "vgdisplay --partial --verbose"
11.5. At last: "vgcfgrestore --file /etc/lvm/backup/vg1 vg1"
(A sketch of how these restore commands and the on-disk metadata copies fit
together also follows the listings below.)
12. I removed the logical volume "drbd".
Note: device /dev/sde was attached the whole time.
13. After all this, running these commands gives:
# pvs
/dev/vg1/dir-1: read failed after 0 of 4096 at 299997528064: Input/output error
/dev/vg1/dir-1: read failed after 0 of 4096 at 299997585408: Input/output error
/dev/vg1/dir-1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg1/dir-1: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg1/dir-1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg1/dir-2: read failed after 0 of 4096 at 504658591744: Input/output error
/dev/vg1/dir-2: read failed after 0 of 4096 at 504658649088: Input/output error
/dev/vg1/dir-2: read failed after 0 of 4096 at 0: Input/output error
/dev/vg1/dir-2: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg1/dir-2: read failed after 0 of 4096 at 0: Input/output error
/dev/vg1/root: read failed after 0 of 4096 at 14331871232: Input/output error
/dev/vg1/root: read failed after 0 of 4096 at 14331928576: Input/output error
/dev/vg1/root: read failed after 0 of 4096 at 0: Input/output error
/dev/vg1/root: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg1/root: read failed after 0 of 4096 at 0: Input/output error
PV        VG   Fmt  Attr PSize   PFree
/dev/md1       lvm2 --   891.46g 891.46g
/dev/sde1 vg1  lvm2 a-   931.51g 931.51g

# lvs
LV    VG  Attr   LSize
dir-1 vg1 vwi-a- 279.39g
dir-2 vg1 vwi-a- 470.00g
root  vg1 vwi-a-  13.35g
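For reference, the layout described in steps 1-7 corresponds roughly to the
commands below. This is only a reconstruction from memory rather than the
exact invocations I used, and the mdadm options in particular are assumptions:

# RAID-1 over the small partitions, RAID-5 over the large ones
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2

# vg1 originally lived on /dev/md1 only
pvcreate /dev/md1
vgcreate vg1 /dev/md1
lvcreate -L 13.35G -n root vg1
lvcreate -L 279.39G -n dir-1 vg1
lvcreate -L 470G -n dir-2 vg1

# later the 1 TB disk was added and the DRBD backing volume put on it
pvcreate /dev/sde1
vgextend vg1 /dev/sde1
lvcreate -L 800G -n drbd vg1 /dev/sde1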
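Regarding step 11: LVM keeps the current metadata copy in /etc/lvm/backup/vg1
and older, pre-change copies under /etc/lvm/archive/, so an archive taken
before the mistaken remove should still describe the original layout. A sketch
of how to see what is available and what the system thinks now (nothing here
modifies anything):

# which metadata backups and archives LVM knows about for vg1
vgcfgrestore --list vg1
ls -l /etc/lvm/backup/vg1 /etc/lvm/archive/

# the PV UUIDs as the system currently sees them
pvs -o +pv_uuid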
Is there any chance of recovering the LVM from this point? I don't care if I
lose the newest data, but I really want to salvage at least the older data.
Tero
* Re: [linux-lvm] /dev/vg1/dir-1: read failed after 0 of 4096 at ...
From: Tero @ 2012-05-30 10:27 UTC (permalink / raw)
To: Ubuntu user technical support, not for general discussions; +Cc: linux-lvm
On 30.5.2012 13:10, Tero wrote:
> [...]
> 9. I noticed a "PV unknown device" entry and I accidentally removed it with
> a command (I am not 100% sure of this) like "vgreduce --removemissing vg1".
> If I remember correctly I used the "-f" switch too. :-(
> [...]
Actually, this is much like what happened to me:
http://bisqwit.iki.fi/story/howto/undopvremove/
A correction to item 9, where I claimed that I used the command "vgreduce":
I am pretty sure I used "pvremove" instead, as described in the article "How
to undo pvremove".
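As I understand it, the approach in that article is to recreate the wiped PV
label with its original UUID and then restore the volume group metadata from a
copy taken before the pvremove. Very roughly, and assuming an archive file from
before the mistake still exists (the UUID, the archive file name and even the
device below are placeholders, not values from my system):

# recreate the PV label with the UUID recorded in the old metadata
pvcreate -ff --uuid ORIGINAL-PV-UUID \
    --restorefile /etc/lvm/archive/vg1_NNNNN-XXXXXXXXXX.vg /dev/md1

# restore the volume group metadata from the same archive and re-activate
vgcfgrestore --file /etc/lvm/archive/vg1_NNNNN-XXXXXXXXXX.vg vg1
vgchange -ay vg1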
Tero