* [linux-lvm] What to do if metadata grows faster than pool data
From: Oliver Rath @ 2014-10-24 14:08 UTC
To: LVM general discussion and development
Hi list,
in my thin pool volume the used data is now at 50%, but the used metadata
is already at 82%. What happens if the metadata runs full before the data is
full? Do I have to grow the metadata size explicitly, or does lvm do this
automatically? How can I do this?
At the moment, 78 volumes or snapshots have been created from this
winthinpool, mostly about 100 GB in size.
Tfh,
Oliver
--- Logical volume ---
LV Name winthinpool
VG Name dmivg
LV UUID gDGkhM-jGQq-hSuZ-g4CZ-JtcQ-JcdX-DqD7bi
LV Write Access read/write
LV Creation host, time dmicn20, 2014-10-07 23:54:45 +0200
LV Pool metadata winthinpool_tmeta
LV Pool data winthinpool_tdata
LV Status available
# open 73
LV Size 930,00 GiB
Allocated pool data 50,73%
Allocated metadata 82,61%
Current LE 238080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:18
* Re: [linux-lvm] What to do if metadata grows faster than pool data
From: Mike Snitzer @ 2014-10-24 14:18 UTC
To: Oliver Rath; +Cc: LVM general discussion and development
On Fri, Oct 24 2014 at 10:08am -0400,
Oliver Rath <rath@mglug.de> wrote:
> Hi list,
>
> in my thin pool volume the used data is now at 50%, but the used metadata
> is already at 82%. What happens if the metadata runs full before the data is
> full? Do I have to grow the metadata size explicitly, or does lvm do this
> automatically? How can I do this?
You'll want to grow the metadata volume. Newer versions of the
dm-thinp kernel code and lvm2 userspace code make this possible.
See the "Manually manage free metadata space of a thin pool LV" section
of the lvmthin.7 manpage in a recent lvm2 release.
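As a purely illustrative sketch (the VG and pool names are the ones from this
thread, the sizes are arbitrary, and exact option support depends on your lvm2
version), checking and growing the pool metadata looks roughly like:

  # show data vs. metadata usage of the pool
  lvs -o lv_name,data_percent,metadata_percent dmivg/winthinpool

  # grow the pool's metadata LV directly ...
  lvextend -L+128M dmivg/winthinpool_tmeta

  # ... or, with newer lvm2, via the pool itself
  lvextend --poolmetadatasize +128M dmivg/winthinpool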
* [linux-lvm] SOLVED [was Re: What to do if metadata grows faster than pool data]
From: Oliver Rath @ 2014-10-24 15:20 UTC
To: linux-lvm
Hi Mike!
I'm using lvm2 2.02.110 with kernel 3.16.
On 24.10.2014 at 16:18, Mike Snitzer wrote:
> On Fri, Oct 24 2014 at 10:08am -0400,
> You'll want to grow the metadata volume. Newer versions of the
> dm-thinp kernel code and lvm2 userspace code make this possible.
>
> See the "Manually manage free metadata space of a thin pool LV" section
> of the lvmthin.7 manpage in a recent lvm2 release.
Your hint to the lvmthin manpage was right:
lvextend -L+96M dmivg/winthinpool_tmeta
worked!
Now the metadata is safe:
--- Logical volume ---
LV Name winthinpool
VG Name dmivg
LV UUID gDGkhM-jGQq-hSuZ-g4CZ-JtcQ-JcdX-DqD7bi
LV Write Access read/write
LV Creation host, time dmicn20, 2014-10-07 23:54:45 +0200
LV Pool metadata winthinpool_tmeta
LV Pool data winthinpool_tdata
LV Status available
# open 73
LV Size 930,00 GiB
Allocated pool data 50,85%
Allocated metadata 45,91%
Current LE 238080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:18
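For keeping an eye on this in the future, a one-liner along these lines (field
names as in lvs(8); shown here only as a suggestion) prints both percentages
for the pool at a glance:

  lvs -o lv_name,lv_size,data_percent,metadata_percent dmivg/winthinpool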
Tfh!
Oliver