* [linux-lvm] LVM Thin Provisioning size limited to 16 GiB?
From: Sebastian Riemer @ 2012-03-02 13:44 UTC (permalink / raw)
To: LVM general discussion and development
Hi list,
I've tested LVM thin provisioning with the latest LVM user-space from
git, today together with kernel 3.2.7.
I've got 24 SAS HDDs combined into 12 MD RAID-1 arrays, and I want a
thin pool striped across all of the RAID-1 arrays. But the pool seems
to be limited to 16 GiB in size: with a bigger size it can't be
activated, and the LVs can't be removed any more - which forces me to
reboot.
I've also tried explicitly setting --poolmetadatasize to 16 GiB and
the data pool to 100 GiB, but with the same result. I also did some
benchmarks; performance wasn't that bad, but it could be much better
(at least doubled).
Is this the current state of development, or am I doing something wrong?
Here are my commands:
vgcreate test /dev/md/test*
lvcreate -i 12 -I 64 -L 16G -T test/pool
lvcreate -V 45G -T test/pool -n test00
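For completeness: the hidden data and metadata sub-LVs that these
commands create should be listable with something like the following
(a read-only query; exact column names may differ between lvm2 versions):

  lvs -a -o lv_name,lv_size,devices test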
Furthermore, writing to and then reading from the thin LV only works
up to 11 GiB. After that, messages like the following appear in the
kernel log.
device-mapper: space map metadata: out of metadata space
device-mapper: thin: dm_thin_insert_block() failed
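For what it's worth, the pool's metadata usage can be watched via its
status line - the thin-pool status reports <used>/<total> metadata
blocks followed by <used>/<total> data blocks (format per the kernel's
Documentation/device-mapper/thin-provisioning.txt; the device name
below is the one LVM creates for this setup):

  dmsetup status test-pool-tpool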
It seems like pool metadata and pool data aren't separated in the
current development state.
Regards,
Sebastian Riemer
* Re: [linux-lvm] LVM Thin Provisioning size limited to 16 GiB?
From: Mike Snitzer @ 2012-03-02 17:17 UTC (permalink / raw)
To: Sebastian Riemer; +Cc: LVM general discussion and development
On Fri, Mar 02 2012 at 8:44am -0500,
Sebastian Riemer <sebastian.riemer@profitbricks.com> wrote:
> Hi list,
>
> I've tested LVM thin provisioning with the latest LVM user-space from
> git, today together with kernel 3.2.7.
>
> I've got 24 SAS HDDs combined into 12 MD RAID-1 arrays, and I want a
> thin pool striped across all of the RAID-1 arrays. But the pool seems
> to be limited to 16 GiB in size: with a bigger size it can't be
> activated, and the LVs can't be removed any more - which forces me to
> reboot.
>
> I've also tried explicitly setting --poolmetadatasize to 16 GiB and
> the data pool to 100 GiB, but with the same result. I also did some
> benchmarks; performance wasn't that bad, but it could be much better
> (at least doubled).
>
> Is this the current state of development, or am I doing something wrong?
You haven't actually shown how you attempted to make use of a 100GB and
16GB metadatasize.
But the maximum metadata device size is 17112760320 bytes (or 15.9375
GiB).
So try with 15GB (even though that is way larger than you need for 100GB
of data).
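Something along these lines (untested sketch, reusing the VG/LV names
from your mail):

  lvcreate -i 12 -I 64 -L 100G --poolmetadatasize 15G -T test/pool
  lvcreate -V 45G -T test/pool -n test00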
> Here are my commands:
> vgcreate test /dev/md/test*
> lvcreate -i 12 -I 64 -L 16G -T test/pool
> lvcreate -V 45G -T test/pool -n test00
>
> Furthermore, writing to and then reading from the thin LV only works
> up to 11 GiB. After that, messages like the following appear in the
> kernel log.
>
> device-mapper: space map metadata: out of metadata space
> device-mapper: thin: dm_thin_insert_block() failed
>
> It seems like pool metadata and pool data aren't separated in the
> current development state.
But in the above test, you've created a striped LV named /dev/test/pool
of 16GB.
And you've written 11GB to the test00 thin device. And you're running
out of metadata space.
This implies to me that LVM2's size guesstimate for the proper data vs
metadata size split for a 16GB volume isn't conservative enough
(relative to metadata size).
Anyway, showing your 'dmsetup table' output would be helpful in the
future.
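That is, simply the raw output of:

  dmsetup table
  dmsetup status

(both are read-only queries).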
Mike
* Re: [linux-lvm] LVM Thin Provisioning size limited to 16 GiB?
From: Sebastian Riemer @ 2012-03-05 10:20 UTC (permalink / raw)
To: Mike Snitzer; +Cc: LVM general discussion and development
On 02/03/12 18:17, Mike Snitzer wrote:
>>
>> I've also tried explicitly setting --poolmetadatasize to 16 GiB and
>> the data pool to 100 GiB, but with the same result. I also did some
>> benchmarks; performance wasn't that bad, but it could be much better
>> (at least doubled).
>>
>
> You haven't actually shown how you attempted to make use of a 100GB and
> 16GB metadatasize.
>
> But the maximum metadata device size is 17112760320 bytes (or 15.9375
> GiB).
>
> So try with 15GB (even though that is way larger than you need for 100GB
> of data).
I've tried these commands:
vgcreate test /dev/md/test*
lvcreate -i 12 -I 64 -L 100G --poolmetadatasize 16G -T test/pool
I don't see any way to select the metadata device in LVM like it is
possible with dmsetup.
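With dmsetup itself the metadata device is simply a separate table
argument, e.g. (sketch only - device names and sizes are made up;
format per Documentation/device-mapper/thin-provisioning.txt):

  dmsetup create pool --table "0 209715200 thin-pool /dev/md/meta /dev/md/data 128 32768"

where 209715200 sectors = 100 GiB of data, 128 sectors = 64 KiB block
size, and 32768 blocks is the low water mark.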
>> Here are my commands:
>> vgcreate test /dev/md/test*
>> lvcreate -i 12 -I 64 -L 16G -T test/pool
>> lvcreate -V 45G -T test/pool -n test00
This is how it is described in the lvcreate man page; there it is
documented as a single lvcreate command.
This creates five dm devices in /dev/mapper:
test-pool: 16,106,127,360 Bytes, 254:3, table: 0 31457280 linear 254:2 0
test-pool_tdata: 16,106,127,360 Bytes, 254:1,
table: 0 31457280 striped 12 128 9:127 2048 9:126 2048 9:125 2048 9:124
2048 9:123 2048 9:122 2048 9:121 2048 9:120 2048 9:119 2048 9:118 2048
9:117 2048 9:116 2048
test-pool_tmeta: 4,194,304 Bytes, 254:0, table: 0 8192 linear 9:116 2623488
test-pool-tpool: 16,106,127,360 Bytes, 254:2
table: 0 31457280 thin-pool 254:0 254:1 128 0 0
test-test00: 48,318,382,080 Bytes, 254:4, table: 0 94371840 thin 254:2 1
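(For the record: test-pool_tmeta above is only 8192 sectors = 4 MiB,
while fully mapping the 16 GiB pool at the 64 KiB chunk size shown in
the table needs 16 GiB / 64 KiB = 262,144 block mappings. If each
mapping plus its btree/space-map overhead costs a couple of dozen
bytes of metadata - an assumption, not a measured figure - 4 MiB runs
out after roughly 180,000 mappings, i.e. around 11 GiB, which matches
what I'm seeing.)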
>> It seems like pool metadata and pool data aren't separated in the
>> current development state.
>
> But in the above test, you've created a striped LV named /dev/test/pool
> of 16GB.
>
> And you've written 11GB to the test00 thin device. And you're running
> out of metadata space.
>
> This implies to me that LVM2's size guesstimate for the proper data vs
> metadata size split for a 16GB volume isn't conservative enough
> (relative to metadata size).
>
> Anyway, showing your 'dmsetup table' output would be helpful in the
> future.
The metadata is on one of the striped devices that are also used for
the data. This is wrong - LVM should support selecting a separate
metadata device!
Sebastian
* Re: [linux-lvm] LVM Thin Provisioning size limited to 16 GiB?
From: Zdenek Kabelac @ 2012-03-09 16:01 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: Sebastian Riemer, Mike Snitzer
On 05/03/2012 11:20, Sebastian Riemer wrote:
> On 02/03/12 18:17, Mike Snitzer wrote:
>>>
>>> I've also tried explicitly setting --poolmetadatasize to 16 GiB and
>>> the data pool to 100 GiB, but with the same result. I also did some
>>> benchmarks; performance wasn't that bad, but it could be much better
>>> (at least doubled).
>>>
>>
>> You haven't actually shown how you attempted to make use of a 100GB and
>> 16GB metadatasize.
>>
>> But the maximum metadata device size is 17112760320 bytes (or 15.9375
>> GiB).
>>
>> So try with 15GB (even though that is way larger than you need for 100GB
>> of data).
>
> I've tried these commands:
> vgcreate test /dev/md/test*
> lvcreate -i 12 -I 64 -L 100G --poolmetadatasize 16G -T test/pool
>
> I don't see any way to select the metadata device in LVM like it is
> possible with dmsetup.
>
>>> Here are my commands:
>>> vgcreate test /dev/md/test*
>>> lvcreate -i 12 -I 64 -L 16G -T test/pool
>>> lvcreate -V 45G -T test/pool -n test00
>
> This is how it is described in the lvcreate man page; there it is
> documented as a single lvcreate command.
>
> This creates five dm devices in /dev/mapper:
> test-pool: 16,106,127,360 Bytes, 254:3, table: 0 31457280 linear 254:2 0
>
> test-pool_tdata: 16,106,127,360 Bytes, 254:1,
Ok - for now the logic is: if you pass a list of PVs on the lvcreate
command line, then the metadata is allocated from the last one. But
this is currently undocumented behavior which may change between
versions - so nothing I'd suggest relying on, but it may work for now.
So for a stripe across 12 devices you would need 13 PVs - the stripe
should then occupy the first 12, and the metadata device should land
on the final, 13th PV.
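Something like this (untested sketch - the PV names are made up; the
important part is that the 13th PV is listed last so that pool_tmeta
lands on it):

  lvcreate -i 12 -I 64 -L 100G --poolmetadatasize 15G -T test/pool /dev/md/test{0..12}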
In the near future there will be lvconvert support, where you just
select an LV for metadata and another LV for the data pool - it should
be quite easy to use then.
For now - you could always use 'pvmove -n pool_tmeta srcPV dstPV' to
relocate the metadata extents onto the PV you want - since the
metadata shouldn't be too large, the operation should be quite fast.
> table: 0 31457280 striped 12 128 9:127 2048 9:126 2048 9:125 2048 9:124
> 2048 9:123 2048 9:122 2048 9:121 2048 9:120 2048 9:119 2048 9:118 2048
> 9:117 2048 9:116 2048
>
> test-pool_tmeta: 4,194,304 Bytes, 254:0, table: 0 8192 linear 9:116 2623488
>
> test-pool-tpool: 16,106,127,360 Bytes, 254:2
> table: 0 31457280 thin-pool 254:0 254:1 128 0 0
>
> test-test00: 48,318,382,080 Bytes, 254:4, table: 0 94371840 thin 254:2 1
>
>>> It seems like pool metadata and pool data aren't separated in the
>>> current development state.
They are separated - in fact, internally they behave like the
allocation of a mirror log device.
Zdenek