linux-lvm.redhat.com archive mirror
* [linux-lvm] lvm/openstack: stripe all volumes and distribute them round robin on PVs
@ 2017-05-10 11:13 Bernd
  2017-05-10 16:58 ` David Teigland
  0 siblings, 1 reply; 3+ messages in thread
From: Bernd @ 2017-05-10 11:13 UTC (permalink / raw)
  To: linux-lvm

Hello,

When using local lvm2 logical volumes for the OpenStack Nova ephemeral
pool, the created logical volumes are linear (not striped) and, worse than
that, all volumes are allocated on the same first physical volume (until
it fills up).

I asked a question about this on ServerFault:

https://serverfault.com/questions/849088/automatically-distribute-lvm-stripes-for-specific-lvm2-vg-on-linux/849255#849255

It was suggested that raid_stripe_all_devices should help (to turn on
striping by default), but it did not. So I went ahead and patched Nova's
lvm.py to turn striping on. However, I still wonder whether there is an
LVM option (an allocation policy) for that.
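
For reference, the patch boils down to passing a stripe count to lvcreate;
a simplified sketch (not the actual Nova code; the helper name and the
VG/LV names are made up) looks like this:

# Sketch only: create an LV striped across N PVs by passing -i/--stripes
# to lvcreate. create_striped_volume() is illustrative, not Nova's API.
import subprocess

def create_striped_volume(vg, lv, size_gb, stripes):
    cmd = ['lvcreate', '-n', lv, '-L', '%dg' % size_gb,
           '-i', str(stripes),  # -i/--stripes: spread the LV over N PVs
           vg]
    subprocess.check_call(cmd)

# e.g. stripe a 10 GiB ephemeral disk across 4 PVs of the VG "nova-vg"
create_striped_volume('nova-vg', 'instance-0001-disk', 10, 4)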

Besides that, it looks like the first stripe is still created on the
first device. It would make more sense (even in the linear case) to
round-robin the LVs across the PVs: the start of a volume is often its
busiest part, and spreading the LVs out leaves room for expanding them.

Is there anything planned in this direction?

Regards
Bernd

* Re: [linux-lvm] lvm/openstack: stripe all volumes and distribute them round robin on PVs
  2017-05-10 11:13 [linux-lvm] lvm/openstack: stripe all volumes and distribute them round robin on PVs Bernd
@ 2017-05-10 16:58 ` David Teigland
  2017-05-12  9:25   ` Bernd Eckenfels
  0 siblings, 1 reply; 3+ messages in thread
From: David Teigland @ 2017-05-10 16:58 UTC (permalink / raw)
  To: Bernd; +Cc: linux-lvm

On Wed, May 10, 2017 at 01:13:37PM +0200, Bernd wrote:
> Hello,
> 
> When using local lvm2 logical volumes for the OpenStack Nova ephemeral
> pool, the created logical volumes are linear (not striped) and, worse than
> that, all volumes are allocated on the same first physical volume (until
> it fills up).
> 
> I asked a question about this on ServerFault:
> 
> https://serverfault.com/questions/849088/automatically-distribute-lvm-stripes-for-specific-lvm2-vg-on-linux/849255#849255
> 
> It was suggested that raid_stripe_all_devices should help (to turn on
> striping by default), but it did not. So I went ahead and patched Nova's
> lvm.py to turn striping on. However, I still wonder whether there is an
> LVM option (an allocation policy) for that.
> 
> Besides that, it looks like the first stripe is still created on the
> first device. It would make more sense (even in the linear case) to
> round-robin the LVs across the PVs: the start of a volume is often its
> busiest part, and spreading the LVs out leaves room for expanding them.
> 
> Is there anything planned in this direction?

If you're concerned with placement of LVs on PVs, I'd probably skip
striping and add some logic specifying different PVs directly:

lvcreate -n name -L size VG PV ...

means the LV will be created using only the specified PVs.
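
For example, lvcreate -n lv0 -L 10G vg /dev/sdc /dev/sdd would allocate
lv0 only from those two PVs (device names are just an example).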

* Re: [linux-lvm] lvm/openstack: stripe all volumes and distribute them round robin on PVs
  2017-05-10 16:58 ` David Teigland
@ 2017-05-12  9:25   ` Bernd Eckenfels
  0 siblings, 0 replies; 3+ messages in thread
From: Bernd Eckenfels @ 2017-05-12  9:25 UTC (permalink / raw)
  To: David Teigland; +Cc: linux-lvm@redhat.com

Hello David,

Yes, I can specify most of the details on the lvcreate command line, but it is unfortunate that I have to. LVM has multiple abstraction layers, yet I cannot set up a volume group with an allocation policy that distributes the volumes over all PVs and stripes by default. And it would be even harder when striping across only a subset of the PVs.

I have to change the client code (in my case Nova's LVM driver) to get a better result. And if I wanted to distribute the start segments over the physical volumes, I would have to add code that actually enumerates them and picks one at random. I had hoped to avoid this. (The next step would be writing DM tables myself...)
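
Roughly, that workaround would look like the following sketch (illustrative
only, not the actual driver code; it picks the PV with the most free space
rather than a random one, but either would do):

# Sketch only: place each new LV on the PV of the VG with the most free space.
import subprocess

def pv_with_most_free(vg):
    out = subprocess.check_output(
        ['pvs', '--noheadings', '--units', 'b', '--nosuffix',
         '-o', 'pv_name,pv_free', '--select', 'vg_name=%s' % vg])
    rows = [line.split() for line in out.decode().splitlines() if line.strip()]
    return max(rows, key=lambda r: float(r[1]))[0]

def create_volume(vg, lv, size_gb):
    # Restrict allocation to one PV, as David suggests in the quoted mail below.
    pv = pv_with_most_free(vg)
    subprocess.check_call(
        ['lvcreate', '-n', lv, '-L', '%dg' % size_gb, vg, pv])

# e.g. create_volume('nova-vg', 'instance-0002-disk', 20)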

In the Nova case the volumes are created and destroyed dynamically, so hand tuning is not an option here.

Regards
Bernd
--
http://bernd.eckenfels.net
________________________________
From: David Teigland <teigland@redhat.com>
Sent: Wednesday, May 10, 2017 6:58:40 PM
To: Bernd
Cc: linux-lvm@redhat.com
Subject: Re: [linux-lvm] lvm/openstack: stripe all volumes and distribute them round robin on PVs

On Wed, May 10, 2017 at 01:13:37PM +0200, Bernd wrote:
> Hello,
>
> When using local lvm2 logical volumes for the OpenStack Nova ephemeral
> pool, the created logical volumes are linear (not striped) and, worse than
> that, all volumes are allocated on the same first physical volume (until
> it fills up).
>
> I asked a question about this on ServerFault:
>
> https://serverfault.com/questions/849088/automatically-distribute-lvm-stripes-for-specific-lvm2-vg-on-linux/849255#849255
>
> It was suggested that raid_stripe_all_devices should help (to turn on
> striping by default), but it did not. So I went ahead and patched Nova's
> lvm.py to turn striping on. However, I still wonder whether there is an
> LVM option (an allocation policy) for that.
>
> Besides that, it looks like the first stripe is still created on the
> first device. It would make more sense (even in the linear case) to
> round-robin the LVs across the PVs: the start of a volume is often its
> busiest part, and spreading the LVs out leaves room for expanding them.
>
> Is there anything planned in this direction?

If you're concerned with placement of LVs on PVs, I'd probably skip
striping and add some logic specifying different PVs directly:

lvcreate -n name -L size VG PV ...

means the LV will be created using only the specified PVs.

