* [linux-lvm] Possible bug in thin metadata size with Linux MDRAID
@ 2017-03-08 16:14 Gionatan Danti
From: Gionatan Danti @ 2017-03-08 16:14 UTC (permalink / raw)
  To: linux-lvm

Hi list,
I would like to understand if this is an lvmthin metadata size bug or if
I am simply missing something.

These are my system specs:
- CentOS 7.3 64 bit with kernel 3.10.0-514.6.1.el7
- LVM version 2.02.166-1.el7_3.2
- two Linux software RAID devices, md127 (root) and md126 (storage)

MD array specs (the interesting one is md126)
Personalities : [raid10]
md126 : active raid10 sdd2[3] sda3[0] sdb2[1] sdc2[2]
       557632000 blocks super 1.2 128K chunks 2 near-copies [4/4] [UUUU]
       bitmap: 1/5 pages [4KB], 65536KB chunk

md127 : active raid10 sdc1[2] sda2[0] sdd1[3] sdb1[1]
       67178496 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
       bitmap: 0/1 pages [0KB], 65536KB chunk

As you can see, /dev/md126 has a 128 KB chunk size. I used this device to
host a physical volume and volume group, on which I created a 512 GB thin
pool. Then I created a thin logical volume of the same size (512 GB) and
started to fill it. Somewhere near (but not at) full capacity, the volume
went offline due to metadata exhaustion.
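
For reference, the PV/VG and the thin volume were prepared more or less
like this (the pool creation itself is shown just below; the thin volume
name "thinvol" here is only an example, not the real one):

[root@blackhole ]# pvcreate /dev/md126
[root@blackhole ]# vgcreate vg_kvm /dev/md126
[root@blackhole ]# lvcreate --thin --virtualsize 512G --name thinvol vg_kvm/thinpool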

Let's see how the thin pool was created and how it appears:
[root@blackhole ]# lvcreate --thin vg_kvm/thinpool -L 512G; lvs -a -o +chunk_size
   Using default stripesize 64.00 KiB.
   Logical volume "thinpool" created.
   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
   [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                         0
   thinpool         vg_kvm    twi-a-tz-- 512.00g             0.00   0.83                           128.00k
   [thinpool_tdata] vg_kvm    Twi-ao---- 512.00g                                                         0
   [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                         0
   root             vg_system -wi-ao----  50.00g                                                         0
   swap             vg_system -wi-ao----   7.62g                                                         0

The metadata volume is considerably smaller (about 2x) than I expected,
and not big enough to reach 100% data utilization. Indeed,
thin_metadata_size shows a minimum metadata area size of over 130 MB:

[root@blackhole ]# thin_metadata_size -b 128k -s 512g -m 1 -u m
thin_metadata_size - 130.04 mebibytes estimated metadata area size for "--block-size=128kibibytes --pool-size=512gibibytes --max-thins=1"

Now, the interesting thing: by explicitly setting --chunksize=128, the 
metadata volume is 2x bigger (and in line with my expectations):
[root@blackhole ]# lvcreate --thin vg_kvm/thinpool -L 512G --chunksize=128; lvs -a -o +chunk_size
   Using default stripesize 64.00 KiB.
   Logical volume "thinpool" created.
   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
   [lvol0_pmspare]  vg_kvm    ewi------- 256.00m                                                         0
   thinpool         vg_kvm    twi-a-tz-- 512.00g             0.00   0.42                           128.00k
   [thinpool_tdata] vg_kvm    Twi-ao---- 512.00g                                                         0
   [thinpool_tmeta] vg_kvm    ewi-ao---- 256.00m                                                         0
   root             vg_system -wi-ao----  50.00g                                                         0
   swap             vg_system -wi-ao----   7.62g                                                         0

Why did I see two very different metadata volume sizes? The chunk size was
128 KB in both cases; the only difference is that I explicitly specified
it on the command line...
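
In the meantime, the safe workaround seems to be specifying both values
explicitly, something along these lines (256M matches what lvcreate
allocated in the second run above):

[root@blackhole ]# lvcreate --thin vg_kvm/thinpool -L 512G --chunksize 128k --poolmetadatasize 256M

For an already-existing pool, the metadata LV can also be grown afterwards
via "lvextend --poolmetadatasize".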

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8
