linux-lvm.redhat.com archive mirror
* [linux-lvm] failing hard drive
@ 2007-03-22 15:36 Tim Milstead
  2007-03-22 15:54 ` Bryn M. Reeves
  2007-03-22 22:33 ` Lamont Peterson
  0 siblings, 2 replies; 7+ messages in thread
From: Tim Milstead @ 2007-03-22 15:36 UTC (permalink / raw)
  To: linux-lvm

I am a complete noob to LVM.
I have 9 partitions on 9 (190GB) disks, one of which (according to SMART 
tools) is failing - hde.
It is running Fedora Core 4 (don't ask).
I want to swap out the drive for a new one (400GB).
pvmove /dev/hde does not work even though the file system is not full. 
It says there are no free extents (or something like that - I don't want 
to needlessly turn the machine back on since hde is getting worse). I 
guess this is because the underlying stuff is full up.
It would be nice if I could just dd the failing drive onto the new drive 
and replace it (using Linux on a CD), but I have no reason to believe 
this will work - will it?
Has anyone got a step-by-step guide of what to do? I guess I must shrink 
the filesystem and then whatever it sits on.
This seems to depend on the version of LVM one is running. I have no 
idea what version comes with Fedora Core 4 or how to find out.

Any ideas welcome.

Thanks,

Tim.

This message has been checked for viruses but the contents of an attachment
may still contain software viruses, which could damage your computer system:
you are advised to perform your own checks. Email communications with the
University of Nottingham may be monitored as permitted by UK legislation.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [linux-lvm] failing hard drive
  2007-03-22 15:36 [linux-lvm] failing hard drive Tim Milstead
@ 2007-03-22 15:54 ` Bryn M. Reeves
  2007-03-22 20:58   ` Tim Milstead
  2007-03-22 22:33 ` Lamont Peterson
  1 sibling, 1 reply; 7+ messages in thread
From: Bryn M. Reeves @ 2007-03-22 15:54 UTC (permalink / raw)
  To: LVM general discussion and development

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Tim,

> It is running Fedora Core 4 (don't ask)
> I want to swap out the drive with a new one (400GB).
> pvmove /dev/hde does not work even though the file system is not full.
> It says there are no free extents (or something like that - I don't want
> to needlessly turn the machine back on since hde is getting worse). I
> guess this is because the underlying stuff is full up.

You're right - it's not the file system that is full here, but rather
the volume group (VG). You can see this by running either:

# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 33.81G 32.00M

# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               33.81 GB
  PE Size               32.00 MB
  Total PE              1082
  Alloc PE / Size       1081 / 33.78 GB
  Free  PE / Size       1 / 32.00 MB
  VG UUID               R3OVax-yaLB-4SDX-V05w-BQSM-eKfV-dnLE2P

> It would be nice if I could just dd the failing drive onto the new drive
> and replace it (using linux on a cd) but I have no reason to believe
> this will work - will it?

This should work OK, although as usual with backups you want to make
sure that nothing is writing to the disk while you take the dd copy -
using a rescue CD would be fine, or deactivating the volume group before
starting (but if the VG includes your root file system then you will
need to use a rescue CD).
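
The dd approach can be sketched as below - as a dry run only, since the
device names here (/dev/hde failing, /dev/sda standing in for the new
400GB disk) are assumptions, and a mistake overwrites the wrong disk.
The run() wrapper just prints each command; swap in the real definition
once the names are verified:

```shell
# Dry-run sketch of the whole-disk copy from a rescue CD.
# /dev/sda as the new disk is an assumption - check the real
# names with "fdisk -l" before doing this for real.
run() { echo "+ $*"; }      # print commands instead of executing them
# run() { "$@"; }           # the real thing, once the names are verified

run vgchange -an VolGroup00  # deactivate the VG so nothing writes mid-copy
# noerror,sync: keep copying past unreadable sectors, zero-padding them,
# instead of aborting at the first read error on the failing drive
run dd if=/dev/hde of=/dev/sda bs=1M conv=noerror,sync
run sync
```

The VG name VolGroup00 is taken from the example output above and is
likely different on Tim's machine.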

> Has anyone got a step by step guide of what to do? I guess I must shrink
> the filesystem and then whatever that sits on.

That may work, but you'll need to make enough space within the VG to
accommodate all the data that is currently stored on the failing hde.

To do this, you first have to shrink the file systems, then shrink the
logical volumes (LVs) that the file systems are sitting on.
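
That shrink route could look like the sketch below - the LV path and
both sizes are invented for illustration, and the wrapper only prints
the commands, since a wrong size here loses data:

```shell
# Dry-run sketch: free extents by shrinking a file system, then its LV.
# The LV path and the sizes are hypothetical.
run() { echo "+ $*"; }   # print instead of execute

run umount /mnt/data                          # ext2/3 can't shrink mounted
run e2fsck -f /dev/VolGroup00/LogVol00        # resize2fs requires a clean fs
run resize2fs /dev/VolGroup00/LogVol00 150G   # 1) shrink the fs first
run lvreduce -L 160G /dev/VolGroup00/LogVol00 # 2) then the LV - never below
                                              #    the new fs size
run mount /dev/VolGroup00/LogVol00 /mnt/data
```

Leaving the LV (160G) a little larger than the file system (150G) is a
deliberate safety margin; the order - file system before LV - is what
protects the data.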

Another option would be to add the new drive to the system and run:

pvcreate /path/to/new/disk

followed by:

vgextend <VG name> /path/to/new/disk

This temporarily brings the VG up to 10 disks, allowing you to remove
the failing member. You should then find that "pvmove /dev/hde" works as
expected (assuming the new disk is at least as big as the one you are
replacing).
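
Put together, the add-then-migrate sequence might look like this (dry
run; /dev/hdg for the new disk and the VG name VolGroup00 are
assumptions, not taken from Tim's system):

```shell
# Dry-run sketch of migrating off the failing PV. /dev/hdg and the
# VG name are placeholders for whatever the real system shows.
run() { echo "+ $*"; }

run pvcreate /dev/hdg             # label the new disk as a physical volume
run vgextend VolGroup00 /dev/hdg  # temporarily a 10-PV volume group
run pvmove /dev/hde               # move every extent off the failing disk
run vgreduce VolGroup00 /dev/hde  # drop the old disk: back to 9 PVs
```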

> This seems to depend on the version of LVM one is running. I have no
> idea what version comes with Fedora Core 4 or how to find out.

You'll have lvm2 in FC4, although it's a relatively old version now.
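
Two quick ways to check, guarded so they only run where the tools are
installed (the output will vary by system):

```shell
# Check the installed LVM version; each line falls back to a note when
# the tool isn't present, so this is safe to run anywhere.
command -v lvm >/dev/null 2>&1 && lvm version || echo "lvm binary not found"
command -v rpm >/dev/null 2>&1 && rpm -q lvm2 || echo "rpm not available"
```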

Kind regards,

Bryn.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org

iD8DBQFGAqbK6YSQoMYUY94RAlG0AJwPGQ60wQ6NyjnouTL9/NsY0fyrwACfdGeS
pyksRh8UAdGrSwlbm2HJ+tI=
=39ku
-----END PGP SIGNATURE-----


* Re: [linux-lvm] failing hard drive
  2007-03-22 15:54 ` Bryn M. Reeves
@ 2007-03-22 20:58   ` Tim Milstead
  2007-03-22 21:31     ` Bryn M. Reeves
  0 siblings, 1 reply; 7+ messages in thread
From: Tim Milstead @ 2007-03-22 20:58 UTC (permalink / raw)
  To: LVM general discussion and development

Bryn M. Reeves wrote:

SNIP
>> It would be nice if I could just dd the failing drive onto the new drive
>> and replace it (using linux on a cd) but I have no reason to believe
>> this will work - will it?
>>     
>
> This should work OK, although as usual with backups you want to make
> sure that nothing is writing to the disk while you take the dd - using a
> rescue CD would be fine, or deactivating the volume group before
> starting (but if it includes your root file system then you will need to
> use a rescue CD).
>   
I have made the copy using a rescue CD and an external USB drive. 
Fitting the replacement internally is going to be difficult, hence the 
question about doing it this way. Are you sure? I was just worried that 
perhaps LVM looked beyond a simple '/dev/hde' and referred to drives in 
some deeper way, e.g. by serial number, model, or make.
>   
>> Has anyone got a step by step guide of what to do? I guess I must shrink
>> the filesystem and then whatever that sits on.
>>     
>
> That may work, but you'll need to make enough space within the VG to
> accommodate all the data that is currently stored on the failing hde.
>
> To do this, you first have to shrink file systems from the VG, then
> shrink the logical volumes (LVs) that the file systems are sitting on.
>
> Another option would be to add the new drive to the system and run:
>
> pvcreate /path/to/new/disk
>
> followed by:
>
> vgextend <VG name> /path/to/new/disk
>   
Thanks.
> To temporarily bring the VG up to 10 disks to allow you to remove the
> failing member. You should then find the "pvmove /dev/hde" works as
> expected (assuming the new disk is at least as big as the one you are
> replacing).
>   
Yes, but given the space constraints I'd rather avoid this.
>   
>> This seems to depend on the version of LVM one is running. I have no
>> idea what version comes with Fedora Core 4 or how to find out.
>>     
>
> You'll have lvm2 in FC4, although it's a relatively old version now.
>
> Kind regards,
>
> Bryn.


* Re: [linux-lvm] failing hard drive
  2007-03-22 20:58   ` Tim Milstead
@ 2007-03-22 21:31     ` Bryn M. Reeves
  0 siblings, 0 replies; 7+ messages in thread
From: Bryn M. Reeves @ 2007-03-22 21:31 UTC (permalink / raw)
  To: LVM general discussion and development

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Tim Milstead wrote:
> I have made the copy using a rescue CD and an external USB drive.
> Fitting the replacement internally is going to be difficult, hence the
> question about doing it this way. Are you sure? I was just worried that
> perhaps LVM looked beyond a simple '/dev/hde' and referred to drives in
> some deeper way, e.g. by serial number, model, or make.

LVM doesn't use the '/dev/hdX' names at all. Instead it scans all block
devices looking for a label that was written to the disk when pvcreate
was run (stored in the second sector).

As long as this label gets copied over, the two devices will be
indistinguishable to LVM. If you dd the whole device across, then you're
definitely going to include the label.
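
This can be demonstrated on a scratch file standing in for a disk -
"LABELONE" is the lvm2 label magic, though a real label also carries a
UUID and metadata pointers that are omitted here:

```shell
# Write a fake PV label into sector 1 (the second 512-byte sector) of
# a scratch file, then read it back the way a device scan would.
img=$(mktemp)
printf 'LABELONE' | dd of="$img" bs=512 seek=1 conv=notrunc 2>/dev/null
dd if="$img" bs=512 skip=1 count=1 2>/dev/null   # prints: LABELONE
rm -f "$img"
```

A plain dd of the whole device copies this sector like any other, which
is why the clone is indistinguishable from the original to LVM.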

You do need to make one of the two copies inaccessible to LVM, though,
if both will still be physically attached to the machine, since they
will carry identical labels.

Either wipe the label out or mask the device using a filter in
/etc/lvm/lvm.conf - see the man page for details of this.
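
For the lvm.conf route, a filter along these lines would mask the
duplicate (the device name is illustrative; patterns are tried in order
and the first match wins):

```
# /etc/lvm/lvm.conf - reject hde, accept everything else
devices {
    filter = [ "r|^/dev/hde|", "a|.*|" ]
}
```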

That said, if I were doing this on my own system, I would use the
pvcreate/vgextend/pvmove steps outlined in my previous message.

>> To temporarily bring the VG up to 10 disks to allow you to remove the
>> failing member. You should then find the "pvmove /dev/hde" works as
>> expected (assuming the new disk is at least as big as the one you are
>> replacing).
>>   
> Yes, but given the space constraints I'd rather avoid this.

I don't see how the space constraints make a difference here - you're
going to have both devices attached at the same time however you decide
to do this.

Once the pvmove completes (analogous to the dd in the other method), you
can run "vgreduce <VG name> /dev/hde" to permanently remove the failing
drive from the volume group, bringing you back down to 9 disks.

Kind regards,

Bryn.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org

iD8DBQFGAvXG6YSQoMYUY94RAjZ2AJ9XJhR/gif4JDKblFGoxxKVtUT8egCgjOdr
XQIwn9vsSnl/NMK59Cj3ybo=
=rz4H
-----END PGP SIGNATURE-----


* Re: [linux-lvm] failing hard drive
  2007-03-22 15:36 [linux-lvm] failing hard drive Tim Milstead
  2007-03-22 15:54 ` Bryn M. Reeves
@ 2007-03-22 22:33 ` Lamont Peterson
  2007-03-23  1:32   ` Stuart D. Gathman
  1 sibling, 1 reply; 7+ messages in thread
From: Lamont Peterson @ 2007-03-22 22:33 UTC (permalink / raw)
  To: LVM

On Thursday 22 March 2007 09:36am, Tim Milstead wrote:
> I am a complete noob to lvm.
> I have 9 partitions on 9 (190GB) disks, one of which (according to smart
> tools) is failing - hde.

I'm curious: it sounds like you are not using RAID, but instead have 9 
PVs in your LVM setup.

If one hard drive will fail in 5,000 hours of use, then 1 in 10 will 
fail within 500 hours, or so the thinking goes. With 9 drives, I'm not 
surprised that one of them is failing.

If you're not using RAID, might I suggest that you do?  Those 9 disks would 
make a nice RAID5 or RAID6 array, and LVM works beautifully on top of 
software RAID, hardware RAID or any combination of both.

LVM does not provide redundancy (yes, I know it can do mirroring, but I 
wouldn't suggest that); it's about easily managing lots of storage 
space. RAID is about reliability/redundancy. Use the right tool for the 
right job, and use both tools together to get the best benefits of both.
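
As a sketch of that layering (dry run only; the device names, the md
array name, and the VG name are hypothetical, not a tested FC4 recipe):

```shell
# Dry-run sketch: software RAID5 underneath, LVM on top.
run() { echo "+ $*"; }

# 9 disks as RAID5: one disk's capacity pays for one-disk fault tolerance
run mdadm --create /dev/md0 --level=5 --raid-devices=9 /dev/hd[a-i]1
run pvcreate /dev/md0         # the whole array becomes a single PV
run vgcreate bigvg /dev/md0   # and LVM manages space on top as before
```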
-- 
Lamont Peterson <peregrine@OpenBrainstem.net>
Founder [ http://blog.OpenBrainstem.net/peregrine/ ]
GPG Key fingerprint: 0E35 93C5 4249 49F0 EC7B  4DDD BE46 4732 6460 CCB5
  ___                   ____            _           _
 / _ \ _ __   ___ _ __ | __ ) _ __ __ _(_)_ __  ___| |_ ___ _ __ ___
| | | | '_ \ / _ \ '_ \|  _ \| '__/ _` | | '_ \/ __| __/ _ \ '_ ` _ \
| |_| | |_) |  __/ | | | |_) | | | (_| | | | | \__ \ ||  __/ | | | | |
 \___/| .__/ \___|_| |_|____/|_|  \__,_|_|_| |_|___/\__\___|_| |_| |_|
      |_|               Intelligent Open Source Software Engineering
                              [ http://www.OpenBrainstem.net/ ]


* Re: [linux-lvm] failing hard drive
  2007-03-22 22:33 ` Lamont Peterson
@ 2007-03-23  1:32   ` Stuart D. Gathman
  2007-03-26  9:18     ` Tim Milstead
  0 siblings, 1 reply; 7+ messages in thread
From: Stuart D. Gathman @ 2007-03-23  1:32 UTC (permalink / raw)
  To: LVM general discussion and development

On Thu, 22 Mar 2007, Lamont Peterson wrote:

> If you're not using RAID, might I suggest that you do?  Those 9 disks would 
> make a nice RAID5 or RAID6 array, and LVM works beautifully on top of 
> software RAID, hardware RAID or any combination of both.
> 
> LVM does not provide redundancy (yes, I know it can do mirroring, but I 
> wouldn't suggest that), it's about easily managing lots of storage space.  
> RAID is about reliability/redundancy.  Use the right tool for the right job, 
> and use both tools together to get all the best benefits of both.

In my use, reliability is paramount, so I prefer RAID1. You get a 
performance boost for reads as well. The md-based mirrors can be 
mounted separately (careful!) if needed for data recovery and bug 
workarounds. (E.g. on CentOS-3, grub won't install on a mirrored boot 
partition, so: umount md0, setfaulty hdb1, mount hda1 on /boot, 
grub-install, umount hda1, mount md0, raidhotadd hdb1.) RAID5 arrays 
can only be accessed as RAID.
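
Those parenthetical steps, laid out as a sequence (dry run; these are
the raidtools-era command names as given in the message - on newer
setups "mdadm --fail" and "mdadm --add" play the setfaulty/raidhotadd
roles):

```shell
# Dry-run of the CentOS-3 grub-on-mirror workaround, commands as in
# the message above (raidtools era; device names from that example).
run() { echo "+ $*"; }

run umount /dev/md0                 # take the mirrored /boot offline
run setfaulty /dev/md0 /dev/hdb1    # drop one half out of the mirror
run mount /dev/hda1 /boot           # mount the surviving half directly
run grub-install /dev/hda           # grub now sees a plain partition
run umount /dev/hda1
run mount /dev/md0 /boot            # restore the mirror...
run raidhotadd /dev/md0 /dev/hdb1   # ...and re-sync the dropped half
```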

-- 
	      Stuart D. Gathman <stuart@bmsi.com>
    Business Management Systems Inc.  Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.


* Re: [linux-lvm] failing hard drive
  2007-03-23  1:32   ` Stuart D. Gathman
@ 2007-03-26  9:18     ` Tim Milstead
  0 siblings, 0 replies; 7+ messages in thread
From: Tim Milstead @ 2007-03-26  9:18 UTC (permalink / raw)
  To: LVM general discussion and development

Stuart D. Gathman wrote:
> On Thu, 22 Mar 2007, Lamont Peterson wrote:
>
>   
>> If you're not using RAID, might I suggest that you do?  Those 9 disks would 
>> make a nice RAID5 or RAID6 array, and LVM works beautifully on top of 
>> software RAID, hardware RAID or any combination of both.
>>
>> LVM does not provide redundancy (yes, I know it can do mirroring, but I 
>> wouldn't suggest that), it's about easily managing lots of storage space.  
>> RAID is about reliability/redundancy.  Use the right tool for the right job, 
>> and use both tools together to get all the best benefits of both.
>>     
>
> In my use, reliability is paramount.  So I prefer RAID1.  You get 
> a performance boost for reads as well.  The md based mirrors can be
> mounted separately (careful!) if needed for data recovery and bug workarounds.
> (E.g. On Centos-3 grub won't install on mirrored boot partition.  So 
> unmount md0, setfaulty hdb1, mount hda1 /boot, grub-install, umount hda1,
> mount md0, raidhotadd hdb1.)  RAID5 arrays can only be accessed as raid.
>
>   
Well, I changed the drive and it seems to work, although dd appears to 
have copied the error across because smartd reports

Device: /dev/hde, 1 Currently unreadable (pending) sectors.

I guess this would be expected? There are no heat issues.

You are correct about the redundancy issue - it's not my choice. I think 
space was paramount when the machine was built.

Thanks for all the help,

Tim.

