* LVM->RAID->LVM
From: Billy Crook @ 2009-05-24 18:31 UTC
To: linux-raid
I use LVM on top of raid (between raid and the filesystem). I chose
that so I could export the LVs as iSCSI LUNs to different machines
for different purposes. I've been thinking lately, though, about also
using LVM below raid (between the partitions and raid). This would
let me 'migrate out' a disk without degrading the redundancy of the
raid array, but I think it could get a little complicated. Then again,
there was a day when I thought LVM was too complicated to be worth it
at all.
If anyone here has done an 'LVM->RAID->LVM sandwich' before, do you
think it was worth it? My understanding of LVM is that its overhead
is minimal, but would this amount of redirection start to be a
problem? What about detection during boot? I assume if I did this,
I'd want a separate volume group for every raid component. Each
exporting only one LV and consuming only one PV until I want to move
that component to another disk. I'm using RHEL/CentOS 5.3 and most of
my storage is served over iSCSI. Some over NFS and CIFS.
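Roughly what I have in mind, as an untested sketch (sdb1/sdc1 and the
VG/LV names are just placeholders, not anything from my setup):

    # one VG per raid component, each with a single PV and a single LV
    pvcreate /dev/sdb1
    vgcreate vg_md0_a /dev/sdb1
    lvcreate -l 100%FREE -n component vg_md0_a
    # /dev/vg_md0_a/component would then be one member of the md array

    # later, to move that component to a new disk without degrading the array:
    pvcreate /dev/sdc1
    vgextend vg_md0_a /dev/sdc1
    pvmove /dev/sdb1 /dev/sdc1      # relocate the LV's extents to the new PV
    vgreduce vg_md0_a /dev/sdb1     # the old disk can then be pulled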
What 'stacks' have you used from disk to filesystem, and what have
been your experiences? (Feel free to reply direct on this so this
doesn't become one giant polling thread.)
I'm using partitioning->RAID->LVM->iSCSI->LUKS->EXT4. It's been
reasonably fast and flexible, except with growing the raid arrays of
course. The mdadm --grow operations seem to freeze md# I/O until
they're past the critical section, which I imagine is by design, but it
freezes and waits even if it can't start the grow because a previous
reshape or resync is still in progress. There's also the slight
nuisance of iSCSI not passing through the new space after growing the
LV until you --delete the LUN and re-add it, but I guess I can live
with that.
I do wish there was an 'mdadm --block-until-all-healthy' that you
could use in scripts to block script execution until there were no
current or DELAYED resyncs or reshapes.
* Re: LVM->RAID->LVM
From: Peter Rabbitson @ 2009-05-24 19:46 UTC
To: Billy Crook; +Cc: linux-raid
Billy Crook wrote:
> <snip>
> I do wish there was an 'mdadm --block-until-all-healthy' that you
> could use in scripts to block script execution until there were no
> current or DELAYED resyncs or reshapes.
You mean mdadm -W?
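E.g., in a script (untested sketch; the array names are just examples):

    # mdadm --wait (-W) blocks until any resync/recovery/reshape on the
    # given array has finished
    for md in /dev/md0 /dev/md1; do
        mdadm --wait "$md"
    done
    # looping over every array should also cover reshapes that are
    # DELAYED behind activity on another array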
* Re: LVM->RAID->LVM
From: Goswin von Brederlow @ 2009-05-25 12:32 UTC
To: Billy Crook; +Cc: linux-raid
Billy Crook <billycrook@gmail.com> writes:
> I use LVM on top of raid (between raid and the filesystem). I chose
> that so I could export the LVs as iSCSI LUNs to different machines
> for different purposes. I've been thinking lately, though, about also
> using LVM below raid (between the partitions and raid). This would
> let me 'migrate out' a disk without degrading the redundancy of the
> raid array, but I think it could get a little complicated. Then again,
> there was a day when I thought LVM was too complicated to be worth it
> at all.
>
> If anyone here has done an 'LVM->RAID->LVM sandwich' before, do you
> think it was worth it? My understanding of LVM is that its overhead
I tried it once and gave it up again. The problem is that a raid
resync only uses idle I/O, but any I/O on LVM gets flagged as the
device being in use. As a result you consistently get the minimum
resync speed of 1 MiB/s (or whatever you set it to), never more. And
if you increase the minimum speed, it takes I/O away even when the
device really isn't idle.
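The knobs I mean, for reference (the 50000 is an arbitrary example):

    cat /proc/sys/dev/raid/speed_limit_min   # the floor, usually 1000 (KiB/s)
    cat /proc/sys/dev/raid/speed_limit_max
    echo 50000 > /proc/sys/dev/raid/speed_limit_min   # raise the floor, at the
                                                      # cost of foreground I/O
    cat /proc/mdstat                         # shows the current resync speed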
> is minimal, but would this amount of redirection start to be a
> problem? What about detection during boot? I assume if I did this,
You need to ensure the LVM detection is run twice, or triggered after
each new block device passes through udev.
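For example (rough sketch, not the actual RHEL init scripts):

    vgscan; vgchange -ay        # first pass: PVs on the bare partitions
    mdadm --assemble --scan     # arrays get assembled on top of those LVs
    vgscan; vgchange -ay        # second pass: PVs that live on the md devices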
> I'd want a separate volume group for every raid component. Each
> exporting only one LV and consuming only one PV until I want to move
> that component to another disk. I'm using RHEL/CentOS 5.3 and most of
> my storage is served over iSCSI. Some over NFS and CIFS.
You certainly don't want multiple PVs in a volume group as any disk
failure takes down the group (stupid userspace).
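If a PV does go missing you can usually still salvage the rest along
these lines (sketch; 'vg0' is a placeholder):

    vgchange -ay --partial vg0       # activate whatever is still complete
    vgreduce --removemissing vg0     # drop the missing PV from the VG metadata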
> What 'stacks' have you used from disk to filesystem, and what have
> been your experiences? (Feel free to reply direct on this so this
> doesn't become one giant polling thread.)
Longest chain so far was:
sata -> raid -> dmcrypt -> lvm -> xen block device -> raid -> lvm -> ext3
That was for testing some raid stuff in a Xen virtual domain. It's the
only reason I've had to have raid twice so far.
Regards,
Goswin