* [linux-lvm] HDD Failure
@ 2006-09-18 18:10 Nick
2006-09-18 19:08 ` Mitch Miller
0 siblings, 1 reply; 34+ messages in thread
From: Nick @ 2006-09-18 18:10 UTC (permalink / raw)
To: linux-lvm
Hello List,
I have 3x 120GB HDDs that are all lumped into one LVM volume group.
Recently, one died, so I removed it using:
# vgreduce --removemissing Vol1
This worked fine but now I can't mount the volume group:
root@nibiru:~# cat /etc/fstab | grep Vol
/dev/mapper/Vol1-share /share ext3 defaults 0 2
root@nibiru:~# mount /share
mount: special device /dev/mapper/Vol1-share does not exist
root@nibiru:~#
I've no idea why it says it doesn't exist, as I thought all I did was
remove the missing PEs from a VG - not remove the actual VG!
Can anyone help me get my data back? Thanks for any advice.
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] HDD Failure
2006-09-18 18:10 [linux-lvm] HDD Failure Nick
@ 2006-09-18 19:08 ` Mitch Miller
2006-09-18 19:13 ` Nick
0 siblings, 1 reply; 34+ messages in thread
From: Mitch Miller @ 2006-09-18 19:08 UTC (permalink / raw)
To: LVM general discussion and development
Nick -- if I understand your situation properly, you effectively took
one of the platters (the bad drive) out of the drive (the vg) and now
you still want to be able to access the entire drive ... ?? I don't
think that's going to happen.
Off to the backups, I'd say ...
-- Mitch
Nick wrote:
> Hello List,
>
> I have a 3x 120GB HDD that are all lumped into one LVM volume. Recently,
> one died so I removed it using.
>
> # vgreduce --removemissing Vol1
>
> This worked fine but now I can't mount the volume group:
>
> root@nibiru:~# cat /etc/fstab | grep Vol
> /dev/mapper/Vol1-share /share ext3 defaults 0 2
> root@nibiru:~# mount /share
> mount: special device /dev/mapper/Vol1-share does not exist
> root@nibiru:~#
>
> I've no idea why it says it doesn't exist as I thought all I did was
> remove missing PE's from a VG - not remove the actual VG!
>
> Can anyone help me get my data back? Thanks for any advice.
>
> Nick
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
* Re: [linux-lvm] HDD Failure
2006-09-18 19:08 ` Mitch Miller
@ 2006-09-18 19:13 ` Nick
2006-09-18 19:23 ` Fabien Jakimowicz
0 siblings, 1 reply; 34+ messages in thread
From: Nick @ 2006-09-18 19:13 UTC (permalink / raw)
To: LVM general discussion and development
Hi Mitch,
Thanks for the response. I understand I won't be able to get all the
data back but I have 2x120GB "good" drives left and want to get what's
left - is this possible?
Thanks, Nick
On Mon, 2006-09-18 at 14:08 -0500, Mitch Miller wrote:
> Nick -- if I understand your situation properly, you effectively took
> one of the platters (the bad drive) out of the drive (the vg) and now
> you still want to be able to access the entire drive ... ?? I don't
> think that's going to happen.
>
> Off to the backups, I'd say ...
>
> -- Mitch
* Re: [linux-lvm] HDD Failure
2006-09-18 19:13 ` Nick
@ 2006-09-18 19:23 ` Fabien Jakimowicz
2006-09-18 19:34 ` Nick
0 siblings, 1 reply; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:23 UTC (permalink / raw)
To: LVM general discussion and development
On Mon, 2006-09-18 at 20:13 +0100, Nick wrote:
> Hi Mitch,
>
> Thanks for the response. I understand I won't be able to get all the
> data back but I have 2x120GB "good" drives left and want to get what's
> left - is this possible?
>
> Thanks, Nick
>
> On Mon, 2006-09-18 at 14:08 -0500, Mitch Miller wrote:
> > Nick -- if I understand your situation properly, you effectively took
> > one of the platters (the bad drive) out of the drive (the vg) and now
> > you still want to be able to access the entire drive ... ?? I don't
> > think that's going to happen.
> >
> > Off to the backups, I'd say ...
> >
> > -- Mitch
>
Did you have only one LV in your VG?
If yes, you can go to your backups.
What do pvdisplay and vgdisplay say?
--
Fabien Jakimowicz <fabien@jakimowicz.com>
* Re: [linux-lvm] HDD Failure
2006-09-18 19:23 ` Fabien Jakimowicz
@ 2006-09-18 19:34 ` Nick
2006-09-18 19:37 ` Mark Krenz
2006-09-18 19:42 ` [linux-lvm] HDD Failure Fabien Jakimowicz
0 siblings, 2 replies; 34+ messages in thread
From: Nick @ 2006-09-18 19:34 UTC (permalink / raw)
To: LVM general discussion and development
Hi Fabien,
Yes, just one LV - "Vol1-share".
Does this mean I've lost *everything*? I would have thought I should
still be able to access everything on the two working disks?
I don't have a backup of this data.
Thanks, Nick
root@nibiru:~# pvdisplay
--- Physical volume ---
PV Name /dev/hda4
VG Name Vol1
PV Size 106.79 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 27339
Free PE 27339
Allocated PE 0
PV UUID Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
--- Physical volume ---
PV Name /dev/hdb1
VG Name Vol1
PV Size 111.75 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 28609
Free PE 28609
Allocated PE 0
PV UUID hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
root@nibiru:~# vgdisplay
--- Volume group ---
VG Name Vol1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 218.55 GB
PE Size 4.00 MB
Total PE 55948
Alloc PE / Size 0 / 0
Free PE / Size 55948 / 218.55 GB
VG UUID RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
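The output above already tells the story: Cur LV is 0 and every PE is free (27339 + 28609 = 55948). As a quick sanity check of the reported numbers, nothing LVM-specific, just shell arithmetic:

```shell
# vgdisplay reported: PE Size 4.00 MB, Total PE 55948, VG Size 218.55 GB.
# Free PE equals Total PE, i.e. no LV remains on either PV.
pe_size_mib=4
total_pe=55948

vg_mib=$((total_pe * pe_size_mib))
vg_gib=$(awk -v m="$vg_mib" 'BEGIN { printf "%.2f", m / 1024 }')
echo "VG size: $vg_mib MiB = $vg_gib GiB"   # agrees with vgdisplay's 218.55 GB
```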
On Mon, 2006-09-18 at 21:23 +0200, Fabien Jakimowicz wrote:
> did you have only one lv in your vg ?
>
> if yes, you can go to your backups.
>
> what does pvdisplay and vgdisplay says ?
* Re: [linux-lvm] HDD Failure
2006-09-18 19:34 ` Nick
@ 2006-09-18 19:37 ` Mark Krenz
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
2006-09-18 19:42 ` [linux-lvm] HDD Failure Fabien Jakimowicz
1 sibling, 1 reply; 34+ messages in thread
From: Mark Krenz @ 2006-09-18 19:37 UTC (permalink / raw)
To: LVM general discussion and development
Nick,
LVM != RAID
You should have been doing RAID if you wanted to be able to handle the
failure of one drive.
On Mon, Sep 18, 2006 at 07:34:37PM GMT, Nick [lists@mogmail.net] said the following:
> Hi Fabien,
>
> Yes, just one LV - "Vol1-share".
>
> Does this mean I've lost *everything*? I would have though I should be
> still be able to access everything on the two working disks?
>
> I don't have a backup of this data.
>
> Thanks, Nick
>
> root@nibiru:~# pvdisplay
> --- Physical volume ---
> PV Name /dev/hda4
> VG Name Vol1
> PV Size 106.79 GB / not usable 0
> Allocatable yes
> PE Size (KByte) 4096
> Total PE 27339
> Free PE 27339
> Allocated PE 0
> PV UUID Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
>
> --- Physical volume ---
> PV Name /dev/hdb1
> VG Name Vol1
> PV Size 111.75 GB / not usable 0
> Allocatable yes
> PE Size (KByte) 4096
> Total PE 28609
> Free PE 28609
> Allocated PE 0
> PV UUID hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
>
> root@nibiru:~# vgdisplay
> --- Volume group ---
> VG Name Vol1
> System ID
> Format lvm2
> Metadata Areas 2
> Metadata Sequence No 3
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 0
> Open LV 0
> Max PV 0
> Cur PV 2
> Act PV 2
> VG Size 218.55 GB
> PE Size 4.00 MB
> Total PE 55948
> Alloc PE / Size 0 / 0
> Free PE / Size 55948 / 218.55 GB
> VG UUID RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
>
>
> On Mon, 2006-09-18 at 21:23 +0200, Fabien Jakimowicz wrote:
> > did you have only one lv in your vg ?
> >
> > if yes, you can go to your backups.
> >
> > what does pvdisplay and vgdisplay says ?
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
* Re: [linux-lvm] HDD Failure
2006-09-18 19:34 ` Nick
2006-09-18 19:37 ` Mark Krenz
@ 2006-09-18 19:42 ` Fabien Jakimowicz
2006-09-18 19:52 ` Nick
1 sibling, 1 reply; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:42 UTC (permalink / raw)
To: LVM general discussion and development
On Mon, 2006-09-18 at 20:34 +0100, Nick wrote:
> Hi Fabien,
>
> Yes, just one LV - "Vol1-share".
>
> Does this mean I've lost *everything*? I would have though I should be
> still be able to access everything on the two working disks?
>
> I don't have a backup of this data.
I'm sure you will now have at least one backup of all your data.
>
> Thanks, Nick
>
> root@nibiru:~# pvdisplay
> --- Physical volume ---
> PV Name /dev/hda4
> VG Name Vol1
> PV Size 106.79 GB / not usable 0
> Allocatable yes
> PE Size (KByte) 4096
> Total PE 27339
> Free PE 27339
> Allocated PE 0
> PV UUID Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
>
> --- Physical volume ---
> PV Name /dev/hdb1
> VG Name Vol1
> PV Size 111.75 GB / not usable 0
> Allocatable yes
> PE Size (KByte) 4096
> Total PE 28609
> Free PE 28609
> Allocated PE 0
> PV UUID hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
>
> root@nibiru:~# vgdisplay
> --- Volume group ---
> VG Name Vol1
> System ID
> Format lvm2
> Metadata Areas 2
> Metadata Sequence No 3
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 0
> Open LV 0
> Max PV 0
> Cur PV 2
> Act PV 2
> VG Size 218.55 GB
> PE Size 4.00 MB
> Total PE 55948
> Alloc PE / Size 0 / 0
> Free PE / Size 55948 / 218.55 GB
> VG UUID RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
As you can see, you have no remaining LVs in this VG, and both of your
PVs are empty (Total PE == Free PE).
Now if you read the vgreduce manual page:
--removemissing
Removes all missing physical volumes from the volume
group and makes the volume group consistent again.
It's a good idea to run this option with --test first to
find out what it would remove before running it for real.
Any logical volumes and dependent snapshots that were
partly on the missing disks get removed completely. This includes
those parts that lie on disks that are still present.
If your logical volumes spanned several disks including
the ones that are lost, you might want to try to salvage data first
by activating your logical volumes with --partial as described in
lvm (8).
you can see that you've just lost all of your data.
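For future readers: the salvage route the man page alludes to only works *before* --removemissing rewrites the metadata. A hypothetical sequence (the tools are standard LVM2; the archive file name is illustrative) would have been:

```shell
# Preview what --removemissing would delete, without touching anything:
#   vgreduce --test --removemissing Vol1
# Activate the surviving parts of the LVs, treating the missing PV as holes:
#   vgchange -ay --partial Vol1
# If --removemissing has already run, LVM's automatic metadata archive may
# still allow a restore once the disk (or a replacement PV) is back
# (the file name below is illustrative):
#   vgcfgrestore -f /etc/lvm/archive/Vol1_00001.vg Vol1
```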
--
Fabien Jakimowicz <fabien@jakimowicz.com>
* Re: [linux-lvm] HDD Failure
2006-09-18 19:42 ` [linux-lvm] HDD Failure Fabien Jakimowicz
@ 2006-09-18 19:52 ` Nick
2006-09-18 19:57 ` Fabien Jakimowicz
0 siblings, 1 reply; 34+ messages in thread
From: Nick @ 2006-09-18 19:52 UTC (permalink / raw)
To: LVM general discussion and development
Oh well, better set up RAID this time!! Thanks for your help (and that
of the others who replied); I appreciate it even though it's bad news.
Nick
On Mon, 2006-09-18 at 21:42 +0200, Fabien Jakimowicz wrote:
> you can see that you've just lost all of your data.
* Re: [linux-lvm] HDD Failure
2006-09-18 19:52 ` Nick
@ 2006-09-18 19:57 ` Fabien Jakimowicz
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
0 siblings, 1 reply; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-18 19:57 UTC (permalink / raw)
To: LVM general discussion and development
On Mon, 2006-09-18 at 20:52 +0100, Nick wrote:
> Oh well, better setup RAID this time!! Thanks for your (And the others
> who replied) help, I appreciate it even though it's bad news.
>
> Nick
You can also run an LVM-over-RAID configuration, but you should really
test it before using it, just so you know exactly what you are doing.
--
Fabien Jakimowicz <fabien@jakimowicz.com>
* [linux-lvm] LVM on RAID
2006-09-18 19:57 ` Fabien Jakimowicz
@ 2006-09-19 21:53 ` Alexander Lazarevich
2006-09-19 22:03 ` Roger Lucas
` (4 more replies)
0 siblings, 5 replies; 34+ messages in thread
From: Alexander Lazarevich @ 2006-09-19 21:53 UTC (permalink / raw)
To: LVM general discussion and development
We have several RAID devices (16-24 drive Fibre/SCSI-attached RAID) which
are currently single devices on our 64-bit Linux servers (RHEL-4, core 5).
We are considering LVM'ing 2 or more of the RAIDs into an LVM group. I
don't doubt the reliability and robustness of LVM2 on single drives, but I
worry about it on top of RAID devices.
Does anyone have any experience with LVM on top of RAID volumes, positive
or negative?
Thanks,
Alex
* RE: [linux-lvm] LVM on RAID
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
@ 2006-09-19 22:03 ` Roger Lucas
2006-09-24 17:19 ` Nix
2006-09-19 22:04 ` Mark Krenz
` (3 subsequent siblings)
4 siblings, 1 reply; 34+ messages in thread
From: Roger Lucas @ 2006-09-19 22:03 UTC (permalink / raw)
To: 'LVM general discussion and development'
I've run LVM on top of hardware raid (Areca SATA system with 8 drives in
RAID-6) as well as LVM on top of software RAID-1 and RAID-5 (both IDE and
SATA). LVM has been no less reliable in this configuration than in any
other. IMHO, running LVM over RAID makes absolute sense. My preferred
installation on a Linux box is to run s/w RAID-1 or RAID-5 with LVM on top
and boot directly into an LVM root partition with initrd. I put a small
(64MB) boot partition on each drive in the RAID array with an identical boot
image and then any drive can boot into the kernel + initrd and start the
RAID+LVM+root. I've never tried it with more sophisticated RAIDs such as
your Fibre or SCSI arrays.
As always, your mileage may vary, but [IDE|SATA]+[HW|SW]RAID+LVM has worked
very well for me.
- Roger
> -----Original Message-----
> From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com]
> On Behalf Of Alexander Lazarevich
> Sent: 19 September 2006 22:54
> To: LVM general discussion and development
> Subject: [linux-lvm] LVM on RAID
>
> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
> are currently single devices on our 64bit linux servers (RHEL-4, core5).
> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
> don't doubt the reliability and robustness of LVM2 on single drives, but I
> worry about it on top of RAID devices.
>
> Does anyone have any experience with LVM on to of RAID volumes, positive
> or negative?
>
> Thanks,
>
> Alex
>
* Re: [linux-lvm] LVM on RAID
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
2006-09-19 22:03 ` Roger Lucas
@ 2006-09-19 22:04 ` Mark Krenz
2006-09-19 22:11 ` Michael Loftis
` (2 subsequent siblings)
4 siblings, 0 replies; 34+ messages in thread
From: Mark Krenz @ 2006-09-19 22:04 UTC (permalink / raw)
To: LVM general discussion and development
I couldn't imagine using LVM on a server without it being on RAID.
Although I've only been using LVM for less than a year, I've installed
it on top of hardware and software RAID-1 on several different servers,
including on Xen virtual machines, and it has all been fine.
Just make sure that you don't expect LVM to keep your data if you lose
a drive and are not doing RAID underneath. ;-)
On Tue, Sep 19, 2006 at 09:53:58PM GMT, Alexander Lazarevich [alazarev@itg.uiuc.edu] said the following:
> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
> are currently single devices on our 64bit linux servers (RHEL-4, core5).
> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
> don't doubt the reliability and robustness of LVM2 on single drives, but I
> worry about it on top of RAID devices.
>
> Does anyone have any experience with LVM on to of RAID volumes, positive
> or negative?
>
> Thanks,
>
> Alex
>
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
* Re: [linux-lvm] LVM on RAID
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
2006-09-19 22:03 ` Roger Lucas
2006-09-19 22:04 ` Mark Krenz
@ 2006-09-19 22:11 ` Michael Loftis
2006-09-20 0:30 ` Fabien Jakimowicz
2006-09-20 13:55 ` Matthew B. Brookover
4 siblings, 0 replies; 34+ messages in thread
From: Michael Loftis @ 2006-09-19 22:11 UTC (permalink / raw)
To: LVM general discussion and development
--On September 19, 2006 4:53:58 PM -0500 Alexander Lazarevich
<alazarev@itg.uiuc.edu> wrote:
> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
> are currently single devices on our 64bit linux servers (RHEL-4, core5).
> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
> don't doubt the reliability and robustness of LVM2 on single drives, but
> I worry about it on top of RAID devices.
>
> Does anyone have any experience with LVM on to of RAID volumes, positive
> or negative?
This is nearly the exact scenario we use. Works great. LVM is indifferent
to the backend block device.
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-18 19:37 ` Mark Krenz
@ 2006-09-19 22:40 ` Scott Lamb
2006-09-20 0:35 ` Fabien Jakimowicz
` (3 more replies)
0 siblings, 4 replies; 34+ messages in thread
From: Scott Lamb @ 2006-09-19 22:40 UTC (permalink / raw)
To: LVM general discussion and development
On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> LVM != RAID
>
> You should have been doing RAID if you wanted to be able to
> handle the
> failure of one drive.
This is my biggest beef with LVM - why doesn't *any* of the
documentation point this out? There are very few good reasons to use
LVM without RAID, and "ignorance" certainly isn't among them. I don't
see any mention of RAID or disk failures in the manual pages or in
the HOWTO.
For example, the recipes chapter [1] of the HOWTO shows a non-trivial
setup with four volume groups split across seven physical drives.
There's no mention of RAID. This is a ridiculously bad idea - if
*any* of those seven drives are lost, at least one volume group will
fail. In some cases, more than one. This document should be showing
best practices, and it's instead showing how to throw away your data.
The "lvcreate" manual page is pretty bad, too. It mentions the
ability to tune stripe size, which on casual read, might suggest that
it uses real RAID. Instead, I think this is just RAID-0.
[1] - http://tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
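To make the stripe point concrete: striping is requested per-LV, and it does indeed give RAID-0 semantics with no redundancy (a sketch; the volume and group names are hypothetical):

```shell
# Stripe a 10GB LV across 2 PVs with a 64KB stripe size.
# This is RAID-0 behaviour: faster I/O, but losing either PV loses the LV.
#   lvcreate -i 2 -I 64 -L 10G -n striped_lv some_vg
```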
--
Scott Lamb <http://www.slamb.org/>
* Re: [linux-lvm] LVM on RAID
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
` (2 preceding siblings ...)
2006-09-19 22:11 ` Michael Loftis
@ 2006-09-20 0:30 ` Fabien Jakimowicz
2006-09-20 13:55 ` Matthew B. Brookover
4 siblings, 0 replies; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-20 0:30 UTC (permalink / raw)
To: LVM general discussion and development
On Tue, 2006-09-19 at 16:53 -0500, Alexander Lazarevich wrote:
> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
> are currently single devices on our 64bit linux servers (RHEL-4, core5).
> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
> don't doubt the reliability and robustness of LVM2 on single drives, but I
> worry about it on top of RAID devices.
>
> Does anyone have any experience with LVM on to of RAID volumes, positive
> or negative?
>
> Thanks,
>
> Alex
Works perfectly on several machines. Maybe having more interaction
between RAID and LVM could be a good thing: for example, when creating
a new LV or extending an existing one on mixed RAID devices (degraded and
not), it would be better if LVM didn't use the degraded devices first.
--
Fabien Jakimowicz <fabien@jakimowicz.com>
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
@ 2006-09-20 0:35 ` Fabien Jakimowicz
2006-09-20 3:22 ` Mark Krenz
` (2 subsequent siblings)
3 siblings, 0 replies; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-20 0:35 UTC (permalink / raw)
To: LVM general discussion and development
On Tue, 2006-09-19 at 15:40 -0700, Scott Lamb wrote:
> On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> > LVM != RAID
> >
> > You should have been doing RAID if you wanted to be able to
> > handle the
> > failure of one drive.
>
> This is my biggest beef with LVM - why doesn't *any* of the
> documentation point this out? There are very few good reasons to use
> LVM without RAID, and "ignorance" certainly isn't among them. I don't
> see any mention of RAID or disk failures in the manual pages or in
> the HOWTO.
>
> For example, the recipes chapter [1] of the HOWTO shows a non-trivial
> setup with four volume groups split across seven physical drives.
> There's no mention of RAID. This is a ridiculously bad idea - if
> *any* of those seven drives are lost, at least one volume group will
> fail. In some cases, more than one. This document should be showing
> best practices, and it's instead showing how to throw away your data.
>
> The "lvcreate" manual page is pretty bad, too. It mentions the
> ability to tune stripe size, which on casual read, might suggest that
> it uses real RAID. Instead, I think this is just RAID-0.
>
> [1] - http://tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
>
LVM is just a Logical Volume Manager. It adds a layer between physical
devices and logical volumes. Such a layer could take care of physical
device failures, but LVM does not. Maybe you are right and it should be
pointed out. Maybe not ...
Having half a block device makes no sense, because there is currently no
filesystem capable of recovering from this kind of failure.
--
Fabien Jakimowicz <fabien@jakimowicz.com>
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
2006-09-20 0:35 ` Fabien Jakimowicz
@ 2006-09-20 3:22 ` Mark Krenz
2006-09-20 11:27 ` Fabien Jakimowicz
2006-09-20 10:50 ` Morten Torstensen
2006-09-20 13:22 ` [linux-lvm] Misleading documentation (was: HDD Failure) Tobias Bluhm
3 siblings, 1 reply; 34+ messages in thread
From: Mark Krenz @ 2006-09-20 3:22 UTC (permalink / raw)
To: LVM general discussion and development
Personally I like it when documentation is kept simple and uses simple
examples. There is nothing worse than when you are trying to learn
something and it tells you how to instantiate a variable, and then
it immediately goes on to show you how to make some complicated reference
to it using some code.
I agree with you though, it's probably a good idea to steer newcomers
in the right direction on disk management, and a few notes about doing
LVM on top of RAID being a good idea couldn't hurt. This is especially
so since I've heard three mentions of people using LVM on a server
without doing RAID this week alone. :-/
I suppose it could also be said that people who are casually doing
LVM on their systems using something like a GUI are most likely not
going to be referencing the man pages or LVM documentation until after
their system is set up, at which point it is probably too late to put the
physical volumes on a RAID array. So I think it's more the
responsibility of the GUI/ncurses installer to alert you to be using
RAID.
Mark
On Tue, Sep 19, 2006 at 10:40:43PM GMT, Scott Lamb [slamb@slamb.org] said the following:
> On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> > LVM != RAID
> >
> > You should have been doing RAID if you wanted to be able to
> >handle the
> >failure of one drive.
>
> This is my biggest beef with LVM - why doesn't *any* of the
> documentation point this out? There are very few good reasons to use
> LVM without RAID, and "ignorance" certainly isn't among them. I don't
> see any mention of RAID or disk failures in the manual pages or in
> the HOWTO.
>
> For example, the recipes chapter [1] of the HOWTO shows a non-trivial
> setup with four volume groups split across seven physical drives.
> There's no mention of RAID. This is a ridiculously bad idea - if
> *any* of those seven drives are lost, at least one volume group will
> fail. In some cases, more than one. This document should be showing
> best practices, and it's instead showing how to throw away your data.
>
> The "lvcreate" manual page is pretty bad, too. It mentions the
> ability to tune stripe size, which on casual read, might suggest that
> it uses real RAID. Instead, I think this is just RAID-0.
>
> [1] - http://tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
>
> --
> Scott Lamb <http://www.slamb.org/>
>
>
--
Mark S. Krenz
IT Director
Suso Technology Services, Inc.
http://suso.org/
* Re: [linux-lvm] Misleading documentation
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
2006-09-20 0:35 ` Fabien Jakimowicz
2006-09-20 3:22 ` Mark Krenz
@ 2006-09-20 10:50 ` Morten Torstensen
2006-09-20 11:40 ` Heinz Mauelshagen
2006-09-20 13:22 ` [linux-lvm] Misleading documentation (was: HDD Failure) Tobias Bluhm
3 siblings, 1 reply; 34+ messages in thread
From: Morten Torstensen @ 2006-09-20 10:50 UTC (permalink / raw)
To: LVM general discussion and development
Scott Lamb wrote:
> This is my biggest beef with LVM - why doesn't *any* of the
> documentation point this out? There are very few good reasons to use LVM
> without RAID, and "ignorance" certainly isn't among them. I don't see
> any mention of RAID or disk failures in the manual pages or in the HOWTO.
Mirroring should be dealt with in LVM itself, by mapping more PEs on
physically separate PVs to each LE. From the little information I've seen
about the new LVM mirroring, it seems that the mirrors are defined at the
LV level instead of the LE/PE level. But after some JFGI sessions nothing
really enlightening has shown up.
--
//Morten Torstensen
//Email: morten@mortent.org
//IM: Cartoon@jabber.no morten.torstensen@gmail.com
And if it turns out that there is a God, I don't believe that he is evil.
The worst that can be said is that he's an underachiever.
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-20 3:22 ` Mark Krenz
@ 2006-09-20 11:27 ` Fabien Jakimowicz
2006-09-20 19:45 ` [linux-lvm] Misleading documentation Barnaby Claydon
0 siblings, 1 reply; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-20 11:27 UTC (permalink / raw)
To: LVM general discussion and development, aj
On Wed, 2006-09-20 at 03:22 +0000, Mark Krenz wrote:
> Personally I like it when documentation is kept simple and uses simple
> examples. There is nothing worse than when you are trying to learn
> something and it tells you how to how to intantiate a variable, and then
> it immediately goes on to show you how to make some complicated reference
> to it using some code.
>
> I agree with you though, its probably a good idea to steer newcomers
> in the right direction on disk management and a few notes about doing
> LVM ontop of RAID being a good idea couldn't hurt. This is especially
> so since I've heard three mentions of people using LVM on a server
> without doing RAID this week alone. :-/
We should add something to the FAQ page
( http://tldp.org/HOWTO/LVM-HOWTO/lvm2faq.html ), like "I've lost one of
my hard drives and I can't mount my LV, did I lose everything?" followed
by a quick explanation: LVM is NOT fault tolerant like RAID1/5; if you
lose a PV, you lose every LV which was (even partially) on it.
Adding a recipe with RAID can't kill us.
A. Setting up LVM over software RAID on four disks
For this recipe, the setup has four disks that will be put into two raid
arrays which will be used as PV.
The main goal of this configuration is to avoid any data loss if one of
the hard drives fails.
A.1 RAID
A.1.1 Preparing the disks
You must partition your disks and set the partition type to Linux raid
autodetect (type FD), if your system can handle it. I recommend making
only one partition per hard drive. You can use cfdisk to do this:
# cfdisk /dev/sda
If your drives are identical, you can save time using sfdisk:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
# sfdisk -d /dev/sda | sfdisk /dev/sdc
# sfdisk -d /dev/sda | sfdisk /dev/sdd
This will partition sdb, sdc and sdd using sda partition table scheme.
A.1.2 Creating arrays
You can check whether your system can handle RAID1 by typing the following:
# cat /proc/mdstat
Personalities : [raid1]
If not (file not found, or no raid1 in the list), load the raid1 module:
# modprobe raid1
You can now create the RAID arrays, assuming you have only one RAID
partition per drive:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
You should wait for the RAID arrays to be fully synchronized; check with:
# cat /proc/mdstat
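Instead of re-running cat by hand, mdadm can block until the initial resync finishes (assuming an mdadm recent enough to support --wait):

```shell
# Block until the initial resync of both arrays completes:
#   mdadm --wait /dev/md0 /dev/md1
```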
A.2 LVM
A.2.1 Create Physical Volumes
Run pvcreate on each RAID array:
# pvcreate /dev/md0
# pvcreate /dev/md1
This creates a volume group descriptor area (VGDA) at the start of each
RAID array.
A.2.2 Setup a Volume Group
# vgcreate my_volume_group /dev/md0 /dev/md1
You should now see something like this:
# vgdisplay
--- Volume group ---
VG Name my_volume_group
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 37
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.18 TB
PE Size 4.00 MB
Total PE 310014
Alloc PE / Size 0 / 0 TB
Free PE / Size 310014 / 1.18 TB
VG UUID LI3k9v-MnIA-lfY6-kdAB-nmpW-adjX-A5yKiF
You should check that 'VG Size' matches your hard drive sizes (raid1
halves the available space, so if you have four 300GB hard drives, you
will have a ~600GB VG).
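As a quick sanity check on that arithmetic (the sizes here are the
example's, not measured): each raid1 pair contributes the capacity of a
single member, so:

```shell
# Usable VG size for raid1 pairs of identical disks:
# each two-disk pair contributes one disk's capacity.
DISK_GB=300   # per-disk size from the example above
PAIRS=2       # four disks -> two raid1 pairs
USABLE_GB=$((DISK_GB * PAIRS))
echo "${USABLE_GB} GB usable"   # 600 GB for four 300GB disks
```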
A.2.3 Create Logical Volumes
You can now create some LVs on your VG:
# lvcreate -L10G -nmy_logical_volume my_volume_group
Logical volume "my_logical_volume" created
# lvcreate -L42G -nmy_cool_lv my_volume_group
Logical volume "my_cool_lv" created
A.2.4 Create the File System
Create an XFS file system on each logical volume:
# mkfs.xfs /dev/my_volume_group/my_logical_volume
meta-data=/dev/my_volume_group/my_logical_volume isize=256 agcount=16, agsize=163840 blks
         =                     sectsz=512
data     =                     bsize=4096 blocks=2621440, imaxpct=25
         =                     sunit=0 swidth=0 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal log         bsize=4096 blocks=2560, version=1
         =                     sectsz=512 sunit=0 blks
realtime =none                 extsz=65536 blocks=0, rtextents=0
# mkfs.xfs /dev/my_volume_group/my_cool_lv
meta-data=/dev/my_volume_group/my_cool_lv isize=256 agcount=16, agsize=688128 blks
         =                     sectsz=512
data     =                     bsize=4096 blocks=11010048, imaxpct=25
         =                     sunit=0 swidth=0 blks, unwritten=1
naming   =version 2            bsize=4096
log      =internal log         bsize=4096 blocks=5376, version=1
         =                     sectsz=512 sunit=0 blks
realtime =none                 extsz=65536 blocks=0, rtextents=0
A.2.5 Test File System
Mount the logical volumes and check that everything is fine:
# mkdir -p /mnt/{my_logical_volume,my_cool_lv}
# mount /dev/my_volume_group/my_logical_volume /mnt/my_logical_volume
# mount /dev/my_volume_group/my_cool_lv /mnt/my_cool_lv
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 1.9G 78M 1.8G 5% /
/dev/mapper/my_volume_group-my_logical_volume
10G 0 10G 0% /mnt/my_logical_volume
/dev/mapper/my_volume_group-my_cool_lv
42G 0 42G 0% /mnt/my_cool_lv
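To have these volumes mounted at boot, /etc/fstab entries along these
lines should work (the options and fsck pass numbers are assumptions,
matching the test mounts above; adjust to taste):

```
/dev/my_volume_group/my_logical_volume  /mnt/my_logical_volume  xfs  defaults  0  0
/dev/my_volume_group/my_cool_lv         /mnt/my_cool_lv         xfs  defaults  0  0
```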
>
> I suppose it could also be said that people who are casually doing
> LVM on their systems using something like a GUI are most likely not
> going to be referencing the man pages or LVM documentation until after
> their system is set up, at which point it is probably too late to put
> the physical volumes on a RAID array. So I think it's more the
> responsibility of the GUI/ncurses installer to alert you to be using
> RAID.
>
> Mark
>
>
> On Tue, Sep 19, 2006 at 10:40:43PM GMT, Scott Lamb [slamb@slamb.org] said the following:
> > On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> > > LVM != RAID
> > >
> > > You should have been doing RAID if you wanted to be able to
> > >handle the
> > >failure of one drive.
> >
> > This is my biggest beef with LVM - why doesn't *any* of the
> > documentation point this out? There are very few good reasons to use
> > LVM without RAID, and "ignorance" certainly isn't among them. I don't
> > see any mention of RAID or disk failures in the manual pages or in
> > the HOWTO.
> >
> > For example, the recipes chapter [1] of the HOWTO shows a non-trivial
> > setup with four volume groups split across seven physical drives.
> > There's no mention of RAID. This is a ridiculously bad idea - if
> > *any* of those seven drives are lost, at least one volume group will
> > fail. In some cases, more than one. This document should be showing
> > best practices, and it's instead showing how to throw away your data.
> >
> > The "lvcreate" manual page is pretty bad, too. It mentions the
> > ability to tune stripe size, which on casual read, might suggest that
> > it uses real RAID. Instead, I think this is just RAID-0.
> >
> > [1] - http://tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
> >
--
Fabien Jakimowicz <fabien@jakimowicz.com>
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation
2006-09-20 10:50 ` Morten Torstensen
@ 2006-09-20 11:40 ` Heinz Mauelshagen
2006-09-20 12:41 ` Les Mikesell
0 siblings, 1 reply; 34+ messages in thread
From: Heinz Mauelshagen @ 2006-09-20 11:40 UTC (permalink / raw)
To: morten, LVM general discussion and development
On Wed, Sep 20, 2006 at 12:50:02PM +0200, Morten Torstensen wrote:
> Scott Lamb wrote:
> >This is my biggest beef with LVM - why doesn't *any* of the
> >documentation point this out? There are very few good reasons to use LVM
> >without RAID, and "ignorance" certainly isn't among them. I don't see
> >any mention of RAID or disk failures in the manual pages or in the HOWTO.
>
> Mirroring should be dealt with in LVM itself, by mapping more PEs on
> physical seperate PVs to each LE. From the little information I've seen
> about the new LVM mirroring, it seems that the mirrors are defined on LV
> level instead of LE/PE. But after some JFGI sessions nothing really
> enlightening has shown up.
LVM2 mirrors by grouping multiple (hidden) logical volumes into a
mirror set.
>
> --
>
> //Morten Torstensen
> //Email: morten@mortent.org
> //IM: Cartoon@jabber.no morten.torstensen@gmail.com
>
> And if it turns out that there is a God, I don't believe that he is evil.
> The worst that can be said is that he's an underachiever.
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Regards,
Heinz -- The LVM Guy --
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Heinz Mauelshagen Red Hat GmbH
Consulting Development Engineer Am Sonnenhang 11
Storage Development 56242 Marienrachdorf
Germany
Mauelshagen@RedHat.com PHONE +49 171 7803392
FAX +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation
2006-09-20 11:40 ` Heinz Mauelshagen
@ 2006-09-20 12:41 ` Les Mikesell
2006-09-21 9:42 ` Heinz Mauelshagen
0 siblings, 1 reply; 34+ messages in thread
From: Les Mikesell @ 2006-09-20 12:41 UTC (permalink / raw)
To: mauelshagen, LVM general discussion and development
On Wed, 2006-09-20 at 06:40, Heinz Mauelshagen wrote:
> > >This is my biggest beef with LVM - why doesn't *any* of the
> > >documentation point this out? There are very few good reasons to use LVM
> > >without RAID, and "ignorance" certainly isn't among them. I don't see
> > >any mention of RAID or disk failures in the manual pages or in the HOWTO.
> >
> > Mirroring should be dealt with in LVM itself, by mapping more PEs on
> > physical seperate PVs to each LE. From the little information I've seen
> > about the new LVM mirroring, it seems that the mirrors are defined on LV
> > level instead of LE/PE. But after some JFGI sessions nothing really
> > enlightening has shown up.
>
> LVM2 does mirror by grouping multiple (hidden) logical volumes
> into a mirror set.
Where would I find example commands to add a mirror, check
the mirroring status, break it, reuse the other half, etc.?
Is it possible to mirror a snapshot instead of the changing
volume?
--
Les Mikesell
lesmikesell@gmail.com
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
` (2 preceding siblings ...)
2006-09-20 10:50 ` Morten Torstensen
@ 2006-09-20 13:22 ` Tobias Bluhm
2006-09-20 23:21 ` Scott Lamb
3 siblings, 1 reply; 34+ messages in thread
From: Tobias Bluhm @ 2006-09-20 13:22 UTC (permalink / raw)
To: LVM general discussion and development
Scott Lamb wrote on 09/19/2006 06:40:43 PM:
> On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> > LVM != RAID
> >
> > You should have been doing RAID if you wanted to be able to
> > handle the
> > failure of one drive.
>
> This is my biggest beef with LVM - why doesn't *any* of the
> documentation point this out? There are very few good reasons to use
> LVM without RAID, and "ignorance" certainly isn't among them. I don't
> see any mention of RAID or disk failures in the manual pages or in
> the HOWTO.
Good point. So if the docs make no mention of RAID or fail-over or
redundancy or spares, what made you assume that LVM had such
capabilities?
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
` (3 preceding siblings ...)
2006-09-20 0:30 ` Fabien Jakimowicz
@ 2006-09-20 13:55 ` Matthew B. Brookover
2006-09-20 14:01 ` Michael T. Babcock
` (2 more replies)
4 siblings, 3 replies; 34+ messages in thread
From: Matthew B. Brookover @ 2006-09-20 13:55 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 991 bytes --]
I have used LVM on top of software raid and iSCSI. It works well. It
also helps keep track of which device is where: iSCSI does not export
its targets in the same order every time, so sometimes sdb shows up as
sdc. LVM will keep track of what is what.
Matt
On Tue, 2006-09-19 at 16:53 -0500, Alexander Lazarevich wrote:
> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
> are currently single devices on our 64bit linux servers (RHEL-4, core5).
> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
> don't doubt the reliability and robustness of LVM2 on single drives, but I
> worry about it on top of RAID devices.
>
> Does anyone have any experience with LVM on to of RAID volumes, positive
> or negative?
>
> Thanks,
>
> Alex
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
[-- Attachment #2: Type: text/html, Size: 1832 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-20 13:55 ` Matthew B. Brookover
@ 2006-09-20 14:01 ` Michael T. Babcock
2006-09-20 14:48 ` Alexander Lazarevich
2006-09-20 17:24 ` Mark H. Wood
2 siblings, 0 replies; 34+ messages in thread
From: Michael T. Babcock @ 2006-09-20 14:01 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 987 bytes --]
I use LVM2 on software and hardware raid on a regular basis. Almost
every server I configure has hardware raid and all the machines I
support run LVM. No problems to report.
Matthew B. Brookover wrote:
> I have used LVM on top of software raid and ISCSI. It works well. It
> also helps keep track of what device is where. ISCSI does not export
> its targets in the same order, some times sdb shows up as sdc....
> LVM will keep track of what is what.
>
> On Tue, 2006-09-19 at 16:53 -0500, Alexander Lazarevich wrote:
>> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
>> are currently single devices on our 64bit linux servers (RHEL-4, core5).
>> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
>> don't doubt the reliability and robustness of LVM2 on single drives, but I
>> worry about it on top of RAID devices.
>>
>> Does anyone have any experience with LVM on to of RAID volumes, positive
>> or negative?
>>
[-- Attachment #2: Type: text/html, Size: 1645 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-20 13:55 ` Matthew B. Brookover
2006-09-20 14:01 ` Michael T. Babcock
@ 2006-09-20 14:48 ` Alexander Lazarevich
2006-09-20 15:57 ` Fabien Jakimowicz
2006-09-21 3:19 ` Andrew Boyko
2006-09-20 17:24 ` Mark H. Wood
2 siblings, 2 replies; 34+ messages in thread
From: Alexander Lazarevich @ 2006-09-20 14:48 UTC (permalink / raw)
To: LVM general discussion and development
I should have been more clear. I'm not worried about LVM on one RAID.
My question is specifically about creating an LVM volume group ACROSS
two RAIDs.
For example, we have a 64bit linux server with two different RAID
devices attached to the host via Fiber. These RAIDs are each 4TB
volumes, attached as /dev/sda and /dev/sdb. What I'm asking about is
creating an LVM volume group, joining /dev/sda AND /dev/sdb to that
same volume group, creating an LV of 8TB (minus overhead, of course),
and then creating a filesystem on that LV: an 8TB filesystem spanned
(via LVM) across both RAIDs.
Does anyone here do that? Reading all the replies, I realize I wasn't
clear enough about that, and neither were anyone's responses.
Alex
On Wed, 20 Sep 2006, Matthew B. Brookover wrote:
> I have used LVM on top of software raid and ISCSI. It works well. It
> also helps keep track of what device is where. ISCSI does not export
> its targets in the same order, some times sdb shows up as sdc.... LVM
> will keep track of what is what.
>
> Matt
>
> On Tue, 2006-09-19 at 16:53 -0500, Alexander Lazarevich wrote:
>
>> We have several RAID devices (16-24 drive Fiber/SCSI attached RAID) which
>> are currently single devices on our 64bit linux servers (RHEL-4, core5).
>> We are considering LVM'ing 2 or more of the RAID's into a LVM group. I
>> don't doubt the reliability and robustness of LVM2 on single drives, but I
>> worry about it on top of RAID devices.
>>
>> Does anyone have any experience with LVM on to of RAID volumes, positive
>> or negative?
>>
>> Thanks,
>>
>> Alex
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* RE: [linux-lvm] LVM on RAID
@ 2006-09-20 14:59 Marlier, Ian
0 siblings, 0 replies; 34+ messages in thread
From: Marlier, Ian @ 2006-09-20 14:59 UTC (permalink / raw)
To: LVM general discussion and development
> -----Original Message-----
> From: linux-lvm-bounces@redhat.com
[mailto:linux-lvm-bounces@redhat.com]
> On Behalf Of Alexander Lazarevich
> Sent: Wednesday, September 20, 2006 10:48 AM
> To: LVM general discussion and development
> Subject: Re: [linux-lvm] LVM on RAID
>
> I should have been more clear. I'm not worried about LVM on one RAID.
> My questions is specifically about creating an LVM volume group ACROSS
> two RAID's.
>
> For example, we have a 64bit linux server, with two different RAID
> devices attached to the host via Fiber. These RAID's are each 4TB
> volumes. The RAID is attached as /dev/sda and /dev/sdb. What I'm asking
> about is creating a LVM volume group, and joining /dev/sda AND /dev/sdb
> to that same volume group, creating the lv of 8TB (minus overhead of
> course), and then creating a filesystem on that lv. A 8TB filesystem,
> which is spanned (via LVM) across both RAID's.
>
> Does anyone here do that? Reading all the reply's I realize I wasn't
> clear enough about that, and neither was anyone's responses.
I'm not using fiber, but I have a very similar setup: 2 x 3ware 9550SX
hardware raid cards, with 3.73TB of SATA disk attached to each, exported
as /dev/sdb and /dev/sdc. I've got a single VG across them.
I allocate logical volumes as needed, rather than having a single
massive LV spanned across the whole thing. At least one of those LVs
has to span the two RAID devices, though, as I've got more allocated
than the capacity of a single RAID device.
All in all, it's been a rock-solid setup, and very fast.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-20 14:48 ` Alexander Lazarevich
@ 2006-09-20 15:57 ` Fabien Jakimowicz
2006-09-21 3:19 ` Andrew Boyko
1 sibling, 0 replies; 34+ messages in thread
From: Fabien Jakimowicz @ 2006-09-20 15:57 UTC (permalink / raw)
To: LVM general discussion and development
[-- Attachment #1: Type: text/plain, Size: 1088 bytes --]
On Wed, 2006-09-20 at 09:48 -0500, Alexander Lazarevich wrote:
> I should have been more clear. I'm not worried about LVM on one RAID. My
> questions is specifically about creating an LVM volume group ACROSS two
> RAID's.
>
> For example, we have a 64bit linux server, with two different RAID devices
> attached to the host via Fiber. These RAID's are each 4TB volumes. The
> RAID is attached as /dev/sda and /dev/sdb. What I'm asking about is
> creating a LVM volume group, and joining /dev/sda AND /dev/sdb to that
> same volume group, creating the lv of 8TB (minus overhead of course), and
> then creating a filesystem on that lv. A 8TB filesystem, which is spanned
> (via LVM) across both RAID's.
>
> Does anyone here do that? Reading all the reply's I realize I wasn't clear
> enough about that, and neither was anyone's responses.
>
> Alex
I have a 'similar' configuration, with less expensive hardware ;)
I have an LVM VG built from 4 software raid arrays; each array is two
hard drives in raid1.
--
Fabien Jakimowicz <fabien@jakimowicz.com>
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-20 13:55 ` Matthew B. Brookover
2006-09-20 14:01 ` Michael T. Babcock
2006-09-20 14:48 ` Alexander Lazarevich
@ 2006-09-20 17:24 ` Mark H. Wood
2 siblings, 0 replies; 34+ messages in thread
From: Mark H. Wood @ 2006-09-20 17:24 UTC (permalink / raw)
To: mbrookov, LVM general discussion and development
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
We've been doing that on a number of hosts, using Compaq SMART2, HP Smart
Array, and Dell PERC II/PERC 2/PERC 4 hardware SCSI RAID cards for several
years. The combination's never been a bit of trouble, and I'll add my
vote to those who wouldn't think of doing a server without some sort of
storage redundancy.
Likewise I wouldn't set up a server without LVM. I was spoiled by the
flexibility of the Netware 4 storage management layer and am loath to give
it up just because we don't run Netware anymore.
- --
Mark H. Wood, Lead System Programmer mwood@IUPUI.Edu
Typically when a software vendor says that a product is "intuitive" he
means the exact opposite.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.4 (GNU/Linux)
Comment: pgpenvelope 2.10.2 - http://pgpenvelope.sourceforge.net/
iD8DBQFFEXlms/NR4JuTKG8RAiX9AKCq3hIMBz/uF3WnTNHzClnxfCdsXwCePY3K
9M8gfDdsD1ShjeJo+5RZ/JE=
=wQc5
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation
2006-09-20 11:27 ` Fabien Jakimowicz
@ 2006-09-20 19:45 ` Barnaby Claydon
0 siblings, 0 replies; 34+ messages in thread
From: Barnaby Claydon @ 2006-09-20 19:45 UTC (permalink / raw)
To: LVM general discussion and development
Fabien Jakimowicz wrote:
> On Wed, 2006-09-20 at 03:22 +0000, Mark Krenz wrote:
>
>> Personally I like it when documentation is kept simple and uses simple
>> examples. There is nothing worse than when you are trying to learn
>> something and it tells you how to how to intantiate a variable, and then
>> it immediately goes on to show you how to make some complicated reference
>> to it using some code.
>>
>> I agree with you though, its probably a good idea to steer newcomers
>> in the right direction on disk management and a few notes about doing
>> LVM ontop of RAID being a good idea couldn't hurt. This is especially
>> so since I've heard three mentions of people using LVM on a server
>> without doing RAID this week alone. :-/
>>
> We should add something in faq page
> ( http://tldp.org/HOWTO/LVM-HOWTO/lvm2faq.html ), like "i've lost one of
> my hard drive and i can't mount my lv, did i lost everything ?" followed
> by a quick explanation : lvm is NOT faulty tolerant like raid1/5, if you
> lose a PV, you lose every LV which was (even partially) on on it.
>
<snip>
Not to nit-pick, but when one of my multi-PV LVs experienced a single PV
failure, I did NOT lose all the data on the LV. The VG was using linear
spanning, so using the --partial parameter with read-only file-system
mounting (XFS in my case) I recovered all the data from the LV that
wasn't physically spanned onto the failed PV.
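For anyone hitting the same failure, the recovery described above looks
roughly like this (the VG/LV names and mount point are placeholders,
and this assumes a linearly-allocated LV; it needs root and a real VG,
so treat it as a sketch rather than a tested procedure):

```shell
# Activate the VG despite the missing PV, then mount read-only
# and copy off whatever data did not live on the failed disk.
vgchange -ay --partial my_volume_group
mount -o ro /dev/my_volume_group/my_lv /mnt/rescue
cp -a /mnt/rescue/. /backup/
```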
If this was some sort of miracle and shouldn't have worked, well I
suppose I'll count my blessings but it seemed perfectly reasonable at
the time.
:)
-Barnaby
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation (was: HDD Failure)
2006-09-20 13:22 ` [linux-lvm] Misleading documentation (was: HDD Failure) Tobias Bluhm
@ 2006-09-20 23:21 ` Scott Lamb
0 siblings, 0 replies; 34+ messages in thread
From: Scott Lamb @ 2006-09-20 23:21 UTC (permalink / raw)
To: LVM general discussion and development
On Sep 20, 2006, at 6:22 AM, Tobias Bluhm wrote:
> Scott Lamb wrote on 09/19/2006 06:40:43 PM:
>> This is my biggest beef with LVM - why doesn't *any* of the
>> documentation point this out? There are very few good reasons to use
>> LVM without RAID, and "ignorance" certainly isn't among them. I don't
>> see any mention of RAID or disk failures in the manual pages or in
>> the HOWTO.
>
> Good point. So if the docs make no mention of RAID or fail-over or
> redundency or spares, what made you assume that
> LVM had such capabilities????
If that "you" is second-person singular, I didn't. I probed deep
enough to discover that it did not do RAID, but there were enough
suggestions that it could do redundancy that it took me a while to
come to that conclusion:
(1) the many-disk examples, which would be a horrible idea otherwise
(2) the striping tuning options
(3) some vague mention of mirroring...Heinz has mentioned it in this
thread, but I don't see any actual information about how it's done.
(4) ...probably more I'm forgetting...
So I don't know why you are surprised that people are assuming this
and ending up without raid. (Did you see Mark Krenz's email? Three
people this week, one of whom actually lost data because of it.)
All is fixable. Fabien's suggested changes are a great improvement.
--
Scott Lamb <http://www.slamb.org/>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-20 14:48 ` Alexander Lazarevich
2006-09-20 15:57 ` Fabien Jakimowicz
@ 2006-09-21 3:19 ` Andrew Boyko
1 sibling, 0 replies; 34+ messages in thread
From: Andrew Boyko @ 2006-09-21 3:19 UTC (permalink / raw)
To: LVM general discussion and development
We do this: we have, connected to an amd64 server via a QLogic
QLA-2340 to an FC switch, several RAID devices (XServe RAID, Nexsan
SATABeast) totalling 15 PVs in a single 27TB VG, with various XFS
filesystems, several of which approach 8-10TB. These filesystems are
then exported to a small lab of NFS clients. The vgdisplay:
--- Volume group ---
VG Name Storage2
System ID
Format lvm2
Metadata Areas 15
Metadata Sequence No 39
VG Access read/write
VG Status resizable
MAX LV 256
Cur LV 8
Open LV 7
Max PV 256
Cur PV 15
Act PV 15
VG Size 27.74 TB
PE Size 32.00 MB
Total PE 909076
Alloc PE / Size 905658 / 27.64 TB
Free PE / Size 3418 / 106.81 GB
I'm only mostly (rather than completely) convinced of the
righteousness of this approach, but it's been working for us, give or
take a dramatic episode that we never managed to completely diagnose
(the SCSI layer blew up in a way that acted like a hardware failure,
but with ultimately no clear evidence that the heart of the fault
wasn't an XFS corruption and no real device problem). We have lost
and replaced drives in the RAID arrays, though it is a little
heart-stopping.
We're doing no meaningful fail-over, other than having a second
server configured and ready to replace the primary one in the case of
server catastrophe (which has happened). No multipathing, no
clustering.
Any commentary on the appropriateness of this approach? Occasionally
I look over at ZFS and the way it collapses the software stack to a
single component, and get a little jealous...
Regards,
Andy Boyko andy@boyko.net
On Sep 20, 2006, at 10:48 AM, Alexander Lazarevich wrote:
> I should have been more clear. I'm not worried about LVM on one
> RAID. My questions is specifically about creating an LVM volume
> group ACROSS two RAID's.
>
> For example, we have a 64bit linux server, with two different RAID
> devices attached to the host via Fiber. These RAID's are each 4TB
> volumes. The RAID is attached as /dev/sda and /dev/sdb. What I'm
> asking about is creating a LVM volume group, and joining /dev/sda
> AND /dev/sdb to that same volume group, creating the lv of 8TB
> (minus overhead of course), and then creating a filesystem on that
> lv. A 8TB filesystem, which is spanned (via LVM) across both RAID's.
>
> Does anyone here do that? Reading all the reply's I realize I
> wasn't clear enough about that, and neither was anyone's responses.
>
> Alex
>
> On Wed, 20 Sep 2006, Matthew B. Brookover wrote:
>
>> I have used LVM on top of software raid and ISCSI. It works
>> well. It
>> also helps keep track of what device is where. ISCSI does not export
>> its targets in the same order, some times sdb shows up as
>> sdc.... LVM
>> will keep track of what is what.
>>
>> Matt
>>
>> On Tue, 2006-09-19 at 16:53 -0500, Alexander Lazarevich wrote:
>>
>>> We have several RAID devices (16-24 drive Fiber/SCSI attached
>>> RAID) which
>>> are currently single devices on our 64bit linux servers (RHEL-4,
>>> core5).
>>> We are considering LVM'ing 2 or more of the RAID's into a LVM
>>> group. I
>>> don't doubt the reliability and robustness of LVM2 on single
>>> drives, but I
>>> worry about it on top of RAID devices.
>>>
>>> Does anyone have any experience with LVM on to of RAID volumes,
>>> positive
>>> or negative?
>>>
>>> Thanks,
>>>
>>> Alex
>>>
>>> _______________________________________________
>>> linux-lvm mailing list
>>> linux-lvm@redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-lvm
>>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation
2006-09-20 12:41 ` Les Mikesell
@ 2006-09-21 9:42 ` Heinz Mauelshagen
2006-09-21 12:52 ` Les Mikesell
0 siblings, 1 reply; 34+ messages in thread
From: Heinz Mauelshagen @ 2006-09-21 9:42 UTC (permalink / raw)
To: Les Mikesell; +Cc: LVM general discussion and development, mauelshagen
On Wed, Sep 20, 2006 at 07:41:45AM -0500, Les Mikesell wrote:
> On Wed, 2006-09-20 at 06:40, Heinz Mauelshagen wrote:
>
> > > >This is my biggest beef with LVM - why doesn't *any* of the
> > > >documentation point this out? There are very few good reasons to use LVM
> > > >without RAID, and "ignorance" certainly isn't among them. I don't see
> > > >any mention of RAID or disk failures in the manual pages or in the HOWTO.
> > >
> > > Mirroring should be dealt with in LVM itself, by mapping more PEs on
> > > physical seperate PVs to each LE. From the little information I've seen
> > > about the new LVM mirroring, it seems that the mirrors are defined on LV
> > > level instead of LE/PE. But after some JFGI sessions nothing really
> > > enlightening has shown up.
> >
> > LVM2 does mirror by grouping multiple (hidden) logical volumes
> > into a mirror set.
>
> Where would I find example commands to add a mirror, check
> the mirroring status, break it, reuse the other half, etc.?
'man lvcreate' to start with.
> Is it possible to mirror a snapshot instead of the changing
> volume?
Not yet.
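For reference, the commands in question look something like the
following in recent LVM2 releases (the names are placeholders, and the
exact options should be checked against the installed man pages, since
mirror support is still evolving):

```shell
# Create a 10G LV with one mirror copy (two legs).
lvcreate -L 10G -m 1 -n my_mirrored_lv my_volume_group
# Check mirror sync progress.
lvs -o lv_name,copy_percent my_volume_group
# Convert back to a single linear copy, freeing the second leg.
lvconvert -m 0 my_volume_group/my_mirrored_lv
```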
>
> --
> Les Mikesell
> lesmikesell@gmail.com
>
--
Regards,
Heinz -- The LVM Guy --
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Heinz Mauelshagen Red Hat GmbH
Consulting Development Engineer Am Sonnenhang 11
Storage Development 56242 Marienrachdorf
Germany
Mauelshagen@RedHat.com PHONE +49 171 7803392
FAX +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] Misleading documentation
2006-09-21 9:42 ` Heinz Mauelshagen
@ 2006-09-21 12:52 ` Les Mikesell
0 siblings, 0 replies; 34+ messages in thread
From: Les Mikesell @ 2006-09-21 12:52 UTC (permalink / raw)
To: mauelshagen; +Cc: LVM general discussion and development
On Thu, 2006-09-21 at 04:42, Heinz Mauelshagen wrote:
> > > >
> > > > Mirroring should be dealt with in LVM itself, by mapping more PEs on
> > > > physical seperate PVs to each LE. From the little information I've seen
> > > > about the new LVM mirroring, it seems that the mirrors are defined on LV
> > > > level instead of LE/PE. But after some JFGI sessions nothing really
> > > > enlightening has shown up.
> > >
> > > LVM2 does mirror by grouping multiple (hidden) logical volumes
> > > into a mirror set.
> >
> > Where would I find example commands to add a mirror, check
> > the mirroring status, break it, reuse the other half, etc.?
>
> 'man lvcreate' to start with.
>
> > Is it possible to mirror a snapshot instead of the changing
> > volume?
>
> Not yet.
Are the parts that do work version-specific? What is the
earliest version where the mirror support is reliable
and has that made it into the popular distributions yet?
--
Les Mikesell
lesmikesell@gmail.com
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [linux-lvm] LVM on RAID
2006-09-19 22:03 ` Roger Lucas
@ 2006-09-24 17:19 ` Nix
0 siblings, 0 replies; 34+ messages in thread
From: Nix @ 2006-09-24 17:19 UTC (permalink / raw)
To: LVM general discussion and development
On Tue, 19 Sep 2006, Roger Lucas said:
> and boot directly into an LVM root partition with initrd.
I do exactly the same thing, with multiple RAID-5 arrays and an LVM
image striped over all of them at once. It works fine.
(I use initramfs for booting instead of initrd: my current boot
setup is at <http://linux-raid.osdl.org/index.php/RAID_Boot>.)
--
`In typical emacs fashion, it is both absurdly ornate and
still not really what one wanted.' --- jdev
^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread, other threads:[~2006-09-24 17:19 UTC | newest]
Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-09-18 18:10 [linux-lvm] HDD Failure Nick
2006-09-18 19:08 ` Mitch Miller
2006-09-18 19:13 ` Nick
2006-09-18 19:23 ` Fabien Jakimowicz
2006-09-18 19:34 ` Nick
2006-09-18 19:37 ` Mark Krenz
2006-09-19 22:40 ` [linux-lvm] Misleading documentation (was: HDD Failure) Scott Lamb
2006-09-20 0:35 ` Fabien Jakimowicz
2006-09-20 3:22 ` Mark Krenz
2006-09-20 11:27 ` Fabien Jakimowicz
2006-09-20 19:45 ` [linux-lvm] Misleading documentation Barnaby Claydon
2006-09-20 10:50 ` Morten Torstensen
2006-09-20 11:40 ` Heinz Mauelshagen
2006-09-20 12:41 ` Les Mikesell
2006-09-21 9:42 ` Heinz Mauelshagen
2006-09-21 12:52 ` Les Mikesell
2006-09-20 13:22 ` [linux-lvm] Misleading documentation (was: HDD Failure) Tobias Bluhm
2006-09-20 23:21 ` Scott Lamb
2006-09-18 19:42 ` [linux-lvm] HDD Failure Fabien Jakimowicz
2006-09-18 19:52 ` Nick
2006-09-18 19:57 ` Fabien Jakimowicz
2006-09-19 21:53 ` [linux-lvm] LVM on RAID Alexander Lazarevich
2006-09-19 22:03 ` Roger Lucas
2006-09-24 17:19 ` Nix
2006-09-19 22:04 ` Mark Krenz
2006-09-19 22:11 ` Michael Loftis
2006-09-20 0:30 ` Fabien Jakimowicz
2006-09-20 13:55 ` Matthew B. Brookover
2006-09-20 14:01 ` Michael T. Babcock
2006-09-20 14:48 ` Alexander Lazarevich
2006-09-20 15:57 ` Fabien Jakimowicz
2006-09-21 3:19 ` Andrew Boyko
2006-09-20 17:24 ` Mark H. Wood
-- strict thread matches above, loose matches on Subject: below --
2006-09-20 14:59 Marlier, Ian