linux-lvm.redhat.com archive mirror
* [linux-lvm] hdd failure
@ 2004-08-01 18:22 Marius Gravdal
  0 siblings, 0 replies; 10+ messages in thread
From: Marius Gravdal @ 2004-08-01 18:22 UTC (permalink / raw)
  To: linux-lvm

One of the disks that made my LVM go bye-bye in the first place seems
to be dying as well. Is there any way to make sure there's no data on
it before I remove it from the LV/VG? I've restored the LV, but I'm
reluctant to use it since one of the HDDs is spitting out a lot of
errors.

If I remember correctly it's one of the drives that I last put in, so
I don't think there's any data on it at all, but I'm not entirely
sure.

And if I do an lvreduce/vgreduce, how can I be sure that it removes
the HDD I want to get out?

Sorry if I'm asking very obvious questions here, but I'm not quite
into the LVM logic yet :)

-- marius
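A sketch of the usual removal sequence, for later readers; the VG name vg0 and the device /dev/sdc1 are placeholders, not names from this thread:

```shell
# Check whether any extents are allocated on the suspect PV.
# "Allocated PE  0" means no LV data lives on that disk.
pvdisplay /dev/sdc1

# If extents ARE allocated, migrate them onto the other PVs first
# (this needs enough free extents elsewhere in the VG):
pvmove /dev/sdc1

# vgreduce takes the PV to remove as an explicit argument, so there
# is no ambiguity about which disk leaves the VG:
vgreduce vg0 /dev/sdc1
```

pvmove can be slow on large disks and should be run while the failing drive still reads reliably.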

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [linux-lvm] HDD Failure
@ 2006-09-18 18:10 Nick
  2006-09-18 19:08 ` Mitch Miller
  0 siblings, 1 reply; 10+ messages in thread
From: Nick @ 2006-09-18 18:10 UTC (permalink / raw)
  To: linux-lvm

Hello List,

I have three 120GB HDDs that are all lumped into one LVM volume group.
Recently, one died, so I removed it using:

# vgreduce --removemissing Vol1

This worked fine, but now I can't mount the logical volume:

root@nibiru:~# cat /etc/fstab | grep Vol
/dev/mapper/Vol1-share /share          ext3    defaults        0       2
root@nibiru:~# mount /share
mount: special device /dev/mapper/Vol1-share does not exist
root@nibiru:~#

I've no idea why it says it doesn't exist, as I thought all I did was
remove missing PEs from the VG - not remove the actual VG!

Can anyone help me get my data back? Thanks for any advice.

Nick
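For reference, --removemissing supports a dry run that reports what would be deleted without touching the metadata; a sketch using the names from this thread:

```shell
# Dry run: show what --removemissing WOULD remove, changing nothing:
vgreduce --test --removemissing Vol1

# Inspect the VG and its LVs before committing to the real run:
vgs Vol1
lvs Vol1
```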


* Re: [linux-lvm] HDD Failure
  2006-09-18 18:10 [linux-lvm] HDD Failure Nick
@ 2006-09-18 19:08 ` Mitch Miller
  2006-09-18 19:13   ` Nick
  0 siblings, 1 reply; 10+ messages in thread
From: Mitch Miller @ 2006-09-18 19:08 UTC (permalink / raw)
  To: LVM general discussion and development

Nick -- if I understand your situation properly, you effectively took 
one of the platters (the bad drive) out of the drive (the vg) and now 
you still want to be able to access the entire drive ... ??  I don't 
think that's going to happen.

Off to the backups, I'd say ...

-- Mitch



Nick wrote:
> Hello List,
> 
> I have a 3x 120GB HDD that are all lumped into one LVM volume. Recently,
> one died so I removed it using.
> 
> # vgreduce --removemissing Vol1
> 
> This worked fine but now I can't mount the volume group:
> 
> root@nibiru:~# cat /etc/fstab | grep Vol
> /dev/mapper/Vol1-share /share          ext3    defaults        0       2
> root@nibiru:~# mount /share
> mount: special device /dev/mapper/Vol1-share does not exist
> root@nibiru:~#
> 
> I've no idea why it says it doesn't exist as I thought all I did was
> remove missing PE's from a VG - not remove the actual VG!
> 
> Can anyone help me get my data back? Thanks for any advice.
> 
> Nick
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 


* Re: [linux-lvm] HDD Failure
  2006-09-18 19:08 ` Mitch Miller
@ 2006-09-18 19:13   ` Nick
  2006-09-18 19:23     ` Fabien Jakimowicz
  0 siblings, 1 reply; 10+ messages in thread
From: Nick @ 2006-09-18 19:13 UTC (permalink / raw)
  To: LVM general discussion and development

Hi Mitch,

Thanks for the response. I understand I won't be able to get all the
data back, but I have two 120GB "good" drives left and want to recover
what's left - is this possible?

Thanks, Nick

On Mon, 2006-09-18 at 14:08 -0500, Mitch Miller wrote:
> Nick -- if I understand your situation properly, you effectively took 
> one of the platters (the bad drive) out of the drive (the vg) and now 
> you still want to be able to access the entire drive ... ??  I don't 
> think that's going to happen.
> 
> Off to the backups, I'd say ...
> 
> -- Mitch


* Re: [linux-lvm] HDD Failure
  2006-09-18 19:13   ` Nick
@ 2006-09-18 19:23     ` Fabien Jakimowicz
  2006-09-18 19:34       ` Nick
  0 siblings, 1 reply; 10+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:23 UTC (permalink / raw)
  To: LVM general discussion and development

On Mon, 2006-09-18 at 20:13 +0100, Nick wrote:
> Hi Mitch,
> 
> Thanks for the response. I understand I won't be able to get all the
> data back but I have 2x120GB "good" drives left and want to get what's
> left - is this possible?
> 
> Thanks, Nick
> 
> On Mon, 2006-09-18 at 14:08 -0500, Mitch Miller wrote:
> > Nick -- if I understand your situation properly, you effectively took 
> > one of the platters (the bad drive) out of the drive (the vg) and now 
> > you still want to be able to access the entire drive ... ??  I don't 
> > think that's going to happen.
> > 
> > Off to the backups, I'd say ...
> > 
> > -- Mitch
> 
Did you have only one LV in your VG?

If yes, you can go to your backups.

What do pvdisplay and vgdisplay say?
-- 
Fabien Jakimowicz <fabien@jakimowicz.com>



* Re: [linux-lvm] HDD Failure
  2006-09-18 19:23     ` Fabien Jakimowicz
@ 2006-09-18 19:34       ` Nick
  2006-09-18 19:37         ` Mark Krenz
  2006-09-18 19:42         ` Fabien Jakimowicz
  0 siblings, 2 replies; 10+ messages in thread
From: Nick @ 2006-09-18 19:34 UTC (permalink / raw)
  To: LVM general discussion and development

Hi Fabien,

Yes, just one LV - "Vol1-share". 

Does this mean I've lost *everything*? I would have thought I should
still be able to access everything on the two working disks?

I don't have a backup of this data.

Thanks, Nick

root@nibiru:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/hda4
  VG Name               Vol1
  PV Size               106.79 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              27339
  Free PE               27339
  Allocated PE          0
  PV UUID               Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh

  --- Physical volume ---
  PV Name               /dev/hdb1
  VG Name               Vol1
  PV Size               111.75 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              28609
  Free PE               28609
  Allocated PE          0
  PV UUID               hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX

root@nibiru:~# vgdisplay
  --- Volume group ---
  VG Name               Vol1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               218.55 GB
  PE Size               4.00 MB
  Total PE              55948
  Alloc PE / Size       0 / 0
  Free  PE / Size       55948 / 218.55 GB
  VG UUID               RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg


On Mon, 2006-09-18 at 21:23 +0200, Fabien Jakimowicz wrote:
> did you have only one lv in your vg ?
> 
> if yes, you can go to your backups.
> 
> what does pvdisplay and vgdisplay says ?


* Re: [linux-lvm] HDD Failure
  2006-09-18 19:34       ` Nick
@ 2006-09-18 19:37         ` Mark Krenz
  2006-09-18 19:42         ` Fabien Jakimowicz
  1 sibling, 0 replies; 10+ messages in thread
From: Mark Krenz @ 2006-09-18 19:37 UTC (permalink / raw)
  To: LVM general discussion and development


  Nick,

  LVM != RAID

  You should have been doing RAID if you wanted to be able to handle the
failure of one drive.


On Mon, Sep 18, 2006 at 07:34:37PM GMT, Nick [lists@mogmail.net] said the following:
> Hi Fabien,
> 
> Yes, just one LV - "Vol1-share". 
> 
> Does this mean I've lost *everything*? I would have though I should be
> still be able to access everything on the two working disks? 
> 
> I don't have a backup of this data.
> 
> Thanks, Nick
> 
> root@nibiru:~# pvdisplay
>   --- Physical volume ---
>   PV Name               /dev/hda4
>   VG Name               Vol1
>   PV Size               106.79 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              27339
>   Free PE               27339
>   Allocated PE          0
>   PV UUID               Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
> 
>   --- Physical volume ---
>   PV Name               /dev/hdb1
>   VG Name               Vol1
>   PV Size               111.75 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              28609
>   Free PE               28609
>   Allocated PE          0
>   PV UUID               hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
> 
> root@nibiru:~# vgdisplay
>   --- Volume group ---
>   VG Name               Vol1
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  3
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                0
>   Open LV               0
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               218.55 GB
>   PE Size               4.00 MB
>   Total PE              55948
>   Alloc PE / Size       0 / 0
>   Free  PE / Size       55948 / 218.55 GB
>   VG UUID               RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
> 
> 
> On Mon, 2006-09-18 at 21:23 +0200, Fabien Jakimowicz wrote:
> > did you have only one lv in your vg ?
> > 
> > if yes, you can go to your backups.
> > 
> > what does pvdisplay and vgdisplay says ?


-- 
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/


* Re: [linux-lvm] HDD Failure
  2006-09-18 19:34       ` Nick
  2006-09-18 19:37         ` Mark Krenz
@ 2006-09-18 19:42         ` Fabien Jakimowicz
  2006-09-18 19:52           ` Nick
  1 sibling, 1 reply; 10+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:42 UTC (permalink / raw)
  To: LVM general discussion and development

On Mon, 2006-09-18 at 20:34 +0100, Nick wrote:
> Hi Fabien,
> 
> Yes, just one LV - "Vol1-share". 
> 
> Does this mean I've lost *everything*? I would have though I should be
> still be able to access everything on the two working disks? 
> 
> I don't have a backup of this data.
I'm sure you will now have at least one backup of all your data.
> 
> Thanks, Nick
> 
> root@nibiru:~# pvdisplay
>   --- Physical volume ---
>   PV Name               /dev/hda4
>   VG Name               Vol1
>   PV Size               106.79 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              27339
>   Free PE               27339
>   Allocated PE          0
>   PV UUID               Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
> 
>   --- Physical volume ---
>   PV Name               /dev/hdb1
>   VG Name               Vol1
>   PV Size               111.75 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              28609
>   Free PE               28609
>   Allocated PE          0
>   PV UUID               hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
> 
> root@nibiru:~# vgdisplay
>   --- Volume group ---
>   VG Name               Vol1
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  3
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                0
>   Open LV               0
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               218.55 GB
>   PE Size               4.00 MB
>   Total PE              55948
>   Alloc PE / Size       0 / 0
>   Free  PE / Size       55948 / 218.55 GB
>   VG UUID               RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
As you can see, you have no remaining LV in this VG, and both of your
PVs are empty (Total PE == Free PE).

Now if you read the vgreduce manual page:

       --removemissing
              Removes all missing physical volumes from the volume
              group and makes the volume group consistent again.

              It's a good idea to run this option with --test first
              to find out what it would remove before running it for
              real.

              Any logical volumes and dependent snapshots that were
              partly on the missing disks get removed completely.
              This includes those parts that lie on disks that are
              still present.

              If your logical volumes spanned several disks including
              the ones that are lost, you might want to try to
              salvage data first by activating your logical volumes
              with --partial as described in lvm(8).

you can see that you've just lost all of your data.
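A sketch of the --partial salvage route the man page mentions, as it would have applied *before* running vgreduce; the mount point is a placeholder, and exact --partial behavior varies by LVM version:

```shell
# Activate the LVs even though a PV is missing; extents that lived on
# the dead disk are unreadable, but the rest remains reachable:
vgchange -ay --partial Vol1

# Mount read-only and copy off whatever the filesystem can still serve:
mount -o ro /dev/Vol1/share /mnt/salvage
```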
-- 
Fabien Jakimowicz <fabien@jakimowicz.com>



* Re: [linux-lvm] HDD Failure
  2006-09-18 19:42         ` Fabien Jakimowicz
@ 2006-09-18 19:52           ` Nick
  2006-09-18 19:57             ` Fabien Jakimowicz
  0 siblings, 1 reply; 10+ messages in thread
From: Nick @ 2006-09-18 19:52 UTC (permalink / raw)
  To: LVM general discussion and development

Oh well, better set up RAID this time!! Thanks for your help (and that
of the others who replied); I appreciate it even though it's bad news.

Nick

On Mon, 2006-09-18 at 21:42 +0200, Fabien Jakimowicz wrote:
> you can see that you've just lost all of your data.


* Re: [linux-lvm] HDD Failure
  2006-09-18 19:52           ` Nick
@ 2006-09-18 19:57             ` Fabien Jakimowicz
  0 siblings, 0 replies; 10+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:57 UTC (permalink / raw)
  To: LVM general discussion and development

On Mon, 2006-09-18 at 20:52 +0100, Nick wrote:
> Oh well, better setup RAID this time!! Thanks for your (And the others
> who replied) help, I appreciate it even though it's bad news.
> 
> Nick
You can also run LVM over a RAID configuration, but you should really
test it before relying on it, so you know exactly what you are doing.
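A sketch of such an LVM-over-RAID layout with mdadm; the device names, VG name, and sizes are placeholders (a two-disk RAID-1 mirror used as a single PV):

```shell
# Build a RAID-1 array from two whole disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put LVM on top of the array instead of on the raw disks, so one
# drive can fail without losing the VG:
pvcreate /dev/md0
vgcreate vg_raid /dev/md0
lvcreate -n share -L 100G vg_raid
mkfs.ext3 /dev/vg_raid/share
```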
-- 
Fabien Jakimowicz <fabien@jakimowicz.com>



end of thread, other threads:[~2006-09-18 19:58 UTC | newest]

Thread overview: 10+ messages
-- links below jump to the message on this page --
2004-08-01 18:22 [linux-lvm] hdd failure Marius Gravdal
  -- strict thread matches above, loose matches on Subject: below --
2006-09-18 18:10 [linux-lvm] HDD Failure Nick
2006-09-18 19:08 ` Mitch Miller
2006-09-18 19:13   ` Nick
2006-09-18 19:23     ` Fabien Jakimowicz
2006-09-18 19:34       ` Nick
2006-09-18 19:37         ` Mark Krenz
2006-09-18 19:42         ` Fabien Jakimowicz
2006-09-18 19:52           ` Nick
2006-09-18 19:57             ` Fabien Jakimowicz
