linux-lvm.redhat.com archive mirror
* [linux-lvm] Testing ThinLVM metadata exhaustion
@ 2016-04-18 14:25 Gionatan Danti
  2016-04-22 13:12 ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-18 14:25 UTC (permalink / raw)
  To: linux-lvm

Hi all,
I'm testing the various metadata exhaustion cases and how to cope with 
them. Specifically, I would like to fully understand what to expect 
after metadata space exhaustion and the related check/repair. To that 
end, metadata autoresize is disabled.
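
(For reference, autoresize is governed by the thin_pool_autoextend_* 
settings in lvm.conf; this is a minimal excerpt of what I kept at the 
"disabled" value - the surrounding defaults may differ between lvm2 
releases:)

# /etc/lvm/lvm.conf (excerpt)
activation {
	# 100 disables automatic extension of thin pools; a lower value
	# (e.g. 70) would instead let dmeventd grow the pool automatically.
	thin_pool_autoextend_threshold = 100
	thin_pool_autoextend_percent = 20
}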

I'm using a fully updated CentOS 6.7 x86_64 virtual machine, with a 
virtual disk (vdb) dedicated to the thin pool / volumes. This is what 
pvs reports:

PV         VG          Fmt  Attr PSize  PFree
/dev/vda2  vg_hvmaster lvm2 a--  63.51g    0
/dev/vdb   vgtest      lvm2 a--  32.00g    0

I did the following operations:
vgcreate vgtest /dev/vdb
lvcreate --thin vgtest/ThinPool -L 1G 	# 4MB tmeta
lvchange -Zn vgtest
lvcreate --thin vgtest/ThinPool --name ThinVol -V 32G
lvresize vgtest/ThinPool -l +100%FREE # 31.99GB, 4MB tmeta, not resized
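
(A rough way to sanity-check the resulting tmeta capacity - 
thin_metadata_size ships with the thin-provisioning tools and may not be 
available on CentOS 6.7; the arithmetic simply re-uses the figures visible 
in the thin_dump output further down:)

# suggested metadata size for a 32 GB pool, 64 KiB chunks, a single thin device
thin_metadata_size -b 64k -s 32g -m 1 -u m

# data_block_size=128 (512-byte sectors) -> 64 KiB chunks
echo $((128 * 512 / 1024))              # 64
# mapped_blocks=121968 chunks of 64 KiB -> ~7.4 GiB mapped (Data% ~23 of 32 GiB),
# while tmeta is already at ~92%
echo $((121968 * 64 / 1024 / 1024))     # 7 (GiB, integer arithmetic)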

With 64 KB chunks, the 4 MB tmeta volume is good for mapping roughly 8 GB, 
so any further writes trigger metadata space exhaustion. Then, I did:

a) a first 8 GB write to almost fill the entire metadata space:
[root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
[root@hvmaster ~]# lvs -a
  LV               VG          Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root          vg_hvmaster -wi-ao---- 59.57g
  lv_swap          vg_hvmaster -wi-ao----  3.94g
  ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  92.09
  [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
  [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
  ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
  [lvol0_pmspare]  vgtest      ewi-------  4.00m
[root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
<superblock uuid="" time="0" transaction="1" data_block_size="128" nr_data_blocks="524096">
  <device dev_id="1" mapped_blocks="121968" transaction="0" creation_time="0" snap_time="0">
    <range_mapping origin_begin="0" data_begin="0" length="121968" time="0"/>
  </device>
</superblock>

b) a second non-synched 16 GB write to totally trash the tmeta volume:
# Second write
[root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
[root@hvmaster ~]# lvs -a
  LV               VG          Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root          vg_hvmaster -wi-ao---- 59.57g
  lv_swap          vg_hvmaster -wi-ao----  3.94g
  ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  92.09
  [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
  [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
  ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
  [lvol0_pmspare]  vgtest      ewi-------  4.00m
[root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
<superblock uuid="" time="0" transaction="1" data_block_size="128" nr_data_blocks="524096">
  <device dev_id="1" mapped_blocks="121968" transaction="0" creation_time="0" snap_time="0">
    <range_mapping origin_begin="0" data_begin="0" length="121968" time="0"/>
  </device>
</superblock>

c) a third, synced 16 GB write, to see how the system behaves with 
fsync-rich filling:
[root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M count=16384 oflag=sync
dd: writing `/dev/vgtest/ThinVol': Input/output error
7624+0 records in
7623+0 records out
7993294848 bytes (8.0 GB) copied, 215.808 s, 37.0 MB/s
[root@hvmaster ~]# lvs -a
   Failed to parse thin params: Error.
   Failed to parse thin params: Error.
   Failed to parse thin params: Error.
   Failed to parse thin params: Error.
  LV               VG          Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root          vg_hvmaster -wi-ao---- 59.57g
  lv_swap          vg_hvmaster -wi-ao----  3.94g
  ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  92.09
  [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
  [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
  ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool
  [lvol0_pmspare]  vgtest      ewi-------  4.00m
[root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
<superblock uuid="" time="0" transaction="1" data_block_size="128" nr_data_blocks="524096">
metadata contains errors (run thin_check for details).
perhaps you wanted to run with --repair

It is the last scenario (c) that puzzles me: rebooting the machine left 
the thin pool inactive and impossible to reactivate (as expected), but 
after running lvconvert --repair I can see that _all_ the metadata is 
gone (the pool seems empty). Is that the expected behavior?
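
(For reference, the repair sequence I used - roughly the standard lvmthin 
workflow, sketched here; exact messages and behaviour may differ between 
lvm2 versions:)

lvchange -an vgtest                           # pool and thin volumes must be inactive
lvconvert --repair vgtest/ThinPool            # thin_repair into lvol0_pmspare, then swap it in
lvchange -ay vgtest
lvs -a vgtest                                 # the damaged metadata LV is kept aside for inspection
thin_dump /dev/mapper/vgtest-ThinPool_tmeta   # after the repair, no mappings are listed at all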

Even more puzzling (for me) is that by skipping tests a and b and going 
directly to c, I get a different behavior: the metadata volume is 
(rightfully) completely filled, and the thin pool goes into read-only 
mode. Again, is that the expected behavior?

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-18 14:25 [linux-lvm] Testing ThinLVM metadata exhaustion Gionatan Danti
@ 2016-04-22 13:12 ` Gionatan Danti
  2016-04-22 14:04   ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-22 13:12 UTC (permalink / raw)
  To: linux-lvm

Il 18-04-2016 16:25 Gionatan Danti ha scritto:
> Hi all,
> I'm testing the various metadata exhaustion cases and how to cope with
> them. Specifically, I would like to fully understand what to expect
> after a metadata space exhaustion and the relative check/repair. To
> such extents, metadata autoresize is disabled.
> 
> I'm using a fully updated CentOS 6.7 x84_64 virtual machine, with a
> virtual disk (vdb) dedicated to the thin pool / volumes. This is what
> pvs reports:
> 
> PV         VG          Fmt  Attr PSize  PFree
> /dev/vda2  vg_hvmaster lvm2 a--  63.51g    0
> /dev/vdb   vgtest      lvm2 a--  32.00g    0
> 
> I did the following operations:
> vgcreate vgtest /dev/vdb
> lvcreate --thin vgtest/ThinPool -L 1G 	# 4MB tmeta
> lvchange -Zn vgtest
> lvcreate --thin vgtest/ThinPool --name ThinVol -V 32G
> lvresize vgtest/ThinPool -l +100%FREE # 31.99GB, 4MB tmeta, not resized
> 
> With 64 KB chunks, the 4 MB tmeta volume is good for mapping ~8 GB, so
> any other writes trigger a metadata space exhaustion. Then, I did:
> 
> a) a first 8 GB write to almost fill the entire metadata space:
> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M 
> count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
> [root@hvmaster ~]# lvs -a
>   LV               VG          Attr       LSize  Pool     Origin Data%
>  Meta%  Move Log Cpy%Sync Convert
>   lv_root          vg_hvmaster -wi-ao---- 59.57g
> 
>   lv_swap          vg_hvmaster -wi-ao----  3.94g
> 
>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  
> 92.09
>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
> 
>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
> 
>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
> 
>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
> <superblock uuid="" time="0" transaction="1" data_block_size="128"
> nr_data_blocks="524096">
>   <device dev_id="1" mapped_blocks="121968" transaction="0"
> creation_time="0" snap_time="0">
>     <range_mapping origin_begin="0" data_begin="0" length="121968" 
> time="0"/>
>   </device>
> </superblock>
> 
> b) a second non-synched 16 GB write to totally trash the tmeta volume:
> # Second write
> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M 
> count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
> [root@hvmaster ~]# lvs -a
>   LV               VG          Attr       LSize  Pool     Origin Data%
>  Meta%  Move Log Cpy%Sync Convert
>   lv_root          vg_hvmaster -wi-ao---- 59.57g
> 
>   lv_swap          vg_hvmaster -wi-ao----  3.94g
> 
>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  
> 92.09
>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
> 
>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
> 
>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
> 
>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
> <superblock uuid="" time="0" transaction="1" data_block_size="128"
> nr_data_blocks="524096">
>   <device dev_id="1" mapped_blocks="121968" transaction="0"
> creation_time="0" snap_time="0">
>     <range_mapping origin_begin="0" data_begin="0" length="121968" 
> time="0"/>
>   </device>
> </superblock>
> 
> c) a third, synched 16 GB write to see how the system behave with
> fsync-rich filling:
> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M
> count=16384 oflag=sync
> dd: writing `/dev/vgtest/ThinVol': Input/output error
> 7624+0 records in
> 7623+0 records out
> 7993294848 bytes (8.0 GB) copied, 215.808 s, 37.0 MB/s
> [root@hvmaster ~]# lvs -a
>   Failed to parse thin params: Error.
>   Failed to parse thin params: Error.
>   Failed to parse thin params: Error.
>   Failed to parse thin params: Error.
>   LV               VG          Attr       LSize  Pool     Origin Data%
>  Meta%  Move Log Cpy%Sync Convert
>   lv_root          vg_hvmaster -wi-ao---- 59.57g
> 
>   lv_swap          vg_hvmaster -wi-ao----  3.94g
> 
>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51  
> 92.09
>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
> 
>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
> 
>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool
> 
>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
> <superblock uuid="" time="0" transaction="1" data_block_size="128"
> nr_data_blocks="524096">
> metadata contains errors (run thin_check for details).
> perhaps you wanted to run with --repair
> 
> It is the last scenario (c) that puzzle me: rebooting the machine left
> the thinpool inactive and inactivable (as expected), but executing
> lvconvert --repair I can see that _all_ metadatas are gone (the pool
> seems empty). Is that the expected behavior?
> 
> Even more puzzling (for me) is that by skipping test a and b, and
> going directly for c, I have a different behavior: the metadata volume
> is (rightfully) completely filled, and the thin pool went in read-only
> mode. Again, it that the expected behavior?
> 
> Regards.

Hi all,
doing more tests, I noticed that when a "catastrophic" (non-recoverable) 
metadata loss happens, dmesg logs the following lines:

device-mapper: block manager: validator mismatch (old=sm_bitmap vs new=btree_node) for block 429
device-mapper: space map common: unable to decrement a reference count below 0
device-mapper: thin: 253:4: metadata operation 'dm_thin_insert_block' failed: error = -22

During "normal" metadata exhaustion (when the pool can recover), the 
first two lines are not logged at all. Moreover, the third line reports 
error = -28, rather than error = -22 as above.
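
(For what it is worth, those numbers are just negated errno values, so the 
two failures differ in kind, not only in detail:)

grep -E 'EINVAL|ENOSPC' /usr/include/asm-generic/errno-base.h
# #define EINVAL  22  /* Invalid argument */         <- the "catastrophic" case
# #define ENOSPC  28  /* No space left on device */  <- plain metadata exhaustion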

I also tested the latest RHEL 7.2 and I cannot reproduce the error above: 
metadata exhaustion always seems to be handled in a graceful (i.e. 
recoverable) manner.

Am I missing something?
Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-22 13:12 ` Gionatan Danti
@ 2016-04-22 14:04   ` Zdenek Kabelac
  2016-04-23  8:40     ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Zdenek Kabelac @ 2016-04-22 14:04 UTC (permalink / raw)
  To: LVM general discussion and development

On 22.4.2016 15:12, Gionatan Danti wrote:
> Il 18-04-2016 16:25 Gionatan Danti ha scritto:
>> Hi all,
>> I'm testing the various metadata exhaustion cases and how to cope with
>> them. Specifically, I would like to fully understand what to expect
>> after a metadata space exhaustion and the relative check/repair. To
>> such extents, metadata autoresize is disabled.
>>
>> I'm using a fully updated CentOS 6.7 x84_64 virtual machine, with a
>> virtual disk (vdb) dedicated to the thin pool / volumes. This is what
>> pvs reports:
>>
>> PV         VG          Fmt  Attr PSize  PFree
>> /dev/vda2  vg_hvmaster lvm2 a--  63.51g    0
>> /dev/vdb   vgtest      lvm2 a--  32.00g    0
>>
>> I did the following operations:
>> vgcreate vgtest /dev/vdb
>> lvcreate --thin vgtest/ThinPool -L 1G     # 4MB tmeta
>> lvchange -Zn vgtest
>> lvcreate --thin vgtest/ThinPool --name ThinVol -V 32G
>> lvresize vgtest/ThinPool -l +100%FREE # 31.99GB, 4MB tmeta, not resized
>>
>> With 64 KB chunks, the 4 MB tmeta volume is good for mapping ~8 GB, so
>> any other writes trigger a metadata space exhaustion. Then, I did:
>>
>> a) a first 8 GB write to almost fill the entire metadata space:
>> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M count=8192
>> 8192+0 records in
>> 8192+0 records out
>> 8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
>> [root@hvmaster ~]# lvs -a
>>   LV               VG          Attr       LSize  Pool     Origin Data%
>>  Meta%  Move Log Cpy%Sync Convert
>>   lv_root          vg_hvmaster -wi-ao---- 59.57g
>>
>>   lv_swap          vg_hvmaster -wi-ao----  3.94g
>>
>>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51 92.09
>>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
>>
>>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
>>
>>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
>>
>>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
>> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
>> <superblock uuid="" time="0" transaction="1" data_block_size="128"
>> nr_data_blocks="524096">
>>   <device dev_id="1" mapped_blocks="121968" transaction="0"
>> creation_time="0" snap_time="0">
>>     <range_mapping origin_begin="0" data_begin="0" length="121968" time="0"/>
>>   </device>
>> </superblock>
>>
>> b) a second non-synched 16 GB write to totally trash the tmeta volume:
>> # Second write
>> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M count=8192
>> 8192+0 records in
>> 8192+0 records out
>> 8589934592 bytes (8.6 GB) copied, 101.059 s, 85.0 MB/s
>> [root@hvmaster ~]# lvs -a
>>   LV               VG          Attr       LSize  Pool     Origin Data%
>>  Meta%  Move Log Cpy%Sync Convert
>>   lv_root          vg_hvmaster -wi-ao---- 59.57g
>>
>>   lv_swap          vg_hvmaster -wi-ao----  3.94g
>>
>>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51 92.09
>>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
>>
>>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
>>
>>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool        23.26
>>
>>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
>> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
>> <superblock uuid="" time="0" transaction="1" data_block_size="128"
>> nr_data_blocks="524096">
>>   <device dev_id="1" mapped_blocks="121968" transaction="0"
>> creation_time="0" snap_time="0">
>>     <range_mapping origin_begin="0" data_begin="0" length="121968" time="0"/>
>>   </device>
>> </superblock>
>>
>> c) a third, synched 16 GB write to see how the system behave with
>> fsync-rich filling:
>> [root@hvmaster ~]# dd if=/dev/zero of=/dev/vgtest/ThinVol bs=1M
>> count=16384 oflag=sync
>> dd: writing `/dev/vgtest/ThinVol': Input/output error
>> 7624+0 records in
>> 7623+0 records out
>> 7993294848 bytes (8.0 GB) copied, 215.808 s, 37.0 MB/s
>> [root@hvmaster ~]# lvs -a
>>   Failed to parse thin params: Error.
>>   Failed to parse thin params: Error.
>>   Failed to parse thin params: Error.
>>   Failed to parse thin params: Error.
>>   LV               VG          Attr       LSize  Pool     Origin Data%
>>  Meta%  Move Log Cpy%Sync Convert
>>   lv_root          vg_hvmaster -wi-ao---- 59.57g
>>
>>   lv_swap          vg_hvmaster -wi-ao----  3.94g
>>
>>   ThinPool         vgtest      twi-aot-M- 31.99g                 21.51 92.09
>>   [ThinPool_tdata] vgtest      Twi-ao---- 31.99g
>>
>>   [ThinPool_tmeta] vgtest      ewi-ao----  4.00m
>>
>>   ThinVol          vgtest      Vwi-a-t--- 32.00g ThinPool
>>
>>   [lvol0_pmspare]  vgtest      ewi-------  4.00m
>> [root@hvmaster ~]# thin_dump /dev/mapper/vgtest-ThinPool_tmeta
>> <superblock uuid="" time="0" transaction="1" data_block_size="128"
>> nr_data_blocks="524096">
>> metadata contains errors (run thin_check for details).
>> perhaps you wanted to run with --repair
>>
>> It is the last scenario (c) that puzzle me: rebooting the machine left
>> the thinpool inactive and inactivable (as expected), but executing
>> lvconvert --repair I can see that _all_ metadatas are gone (the pool
>> seems empty). Is that the expected behavior?
>>
>> Even more puzzling (for me) is that by skipping test a and b, and
>> going directly for c, I have a different behavior: the metadata volume
>> is (rightfully) completely filled, and the thin pool went in read-only
>> mode. Again, it that the expected behavior?
>>
>> Regards.
>
> Hi all,
> doing more tests I noticed that when "catastrophic" (non recoverable) metadata
> loss happens, dmesg logs the following lines:
>
> device-mapper: block manager: validator mismatch (old=sm_bitmap vs
> new=btree_node) for block 429
> device-mapper: space map common: unable to decrement a reference count below 0
> device-mapper: thin: 253:4: metadata operation 'dm_thin_insert_block' failed:
> error = -22
>
> During "normal" metadata exhaustion (when the pool can recover), the first two
> lines are not logged at all. Moreover, the third line reports error = -28,
> rather than error = -22 as above.
>
> I also tested the latest RHEL 7.2 and I can not reproduce the error above:
> metadata exhaustion always seems to be managed in a graceful (ie: recoverable)
> manner.
>
> I am missing something?


I assume you are missing a newer kernel.
This bug was originally present and has been fixed in newer releases.
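
(A quick way to check which thin-pool target version a running kernel 
provides - the version numbers below are placeholders only:)

dmsetup targets | grep thin
# thin-pool        v1.x.y
# thin             v1.x.y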

Regards

Zdenek

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-22 14:04   ` Zdenek Kabelac
@ 2016-04-23  8:40     ` Gionatan Danti
  2016-04-25  8:59       ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-23  8:40 UTC (permalink / raw)
  To: LVM general discussion and development

On 22/04/2016 16:04, Zdenek Kabelac wrote:
>
>
> I assume you miss newer kernel.
> There was originally this bug.
>
> Regards
>
> Zdenek

Hi Zdenek,
I am running CentOS 6.7 fully patched, kernel version 
2.6.32-573.22.1.el6.x86_64

Should I open a BZ report for it, or is RH already aware of the problem on 
RHEL/CentOS 6.7?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-23  8:40     ` Gionatan Danti
@ 2016-04-25  8:59       ` Gionatan Danti
  2016-04-25  9:54         ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-25  8:59 UTC (permalink / raw)
  To: LVM general discussion and development

Il 23-04-2016 10:40 Gionatan Danti ha scritto:
> On 22/04/2016 16:04, Zdenek Kabelac wrote:
>> 
>> 
>> I assume you miss newer kernel.
>> There was originally this bug.
>> 
>> Regards
>> 
>> Zdenek
> 
> Hi Zdenek,
> I am running CentOS 6.7 fully patched, kernel version 
> 2.6.32-573.22.1.el6.x86_64
> 
> Should I open a BZ report for it is RH aware of the problem on 
> RH/CentOS 6.7?
> 
> Thanks.

Hi,
sorry for the bump, but I would really like to understand whether current 
RHEL 6 and RHEL 7 kernels are affected by this serious bug and, if that is 
the case, whether Red Hat is aware of it.

I understand this is not the best place to ask about a specific 
distribution, but I see many Red Hat people here ;)

If this really is the wrong place, can someone point me to the right one 
(RH Bugzilla)?
Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-25  8:59       ` Gionatan Danti
@ 2016-04-25  9:54         ` Zdenek Kabelac
  2016-04-25 16:52           ` Gionatan Danti
  2016-04-26  7:11           ` Gionatan Danti
  0 siblings, 2 replies; 10+ messages in thread
From: Zdenek Kabelac @ 2016-04-25  9:54 UTC (permalink / raw)
  To: linux-lvm

On 25.4.2016 10:59, Gionatan Danti wrote:
> Il 23-04-2016 10:40 Gionatan Danti ha scritto:
>> On 22/04/2016 16:04, Zdenek Kabelac wrote:
>>>
>>>
>>> I assume you miss newer kernel.
>>> There was originally this bug.
>>>
>>> Regards
>>>
>>> Zdenek
>>
>> Hi Zdenek,
>> I am running CentOS 6.7 fully patched, kernel version
>> 2.6.32-573.22.1.el6.x86_64
>>
>> Should I open a BZ report for it is RH aware of the problem on RH/CentOS 6.7?
>>
>> Thanks.
>
> Hi,
> sorry for the bump, by I really like to understand if current RHEL6 and RHEL7
> kernels are affected by this serious bug and, if it is the case, if Red Hat is
> aware of that.
>
> I understand this is not the better place to ask about a specific
> distribution, but I see many RedHat peoples here ;)
>
> If this really is the wrong place, can someone point me to the right one (RH
> Bugzilla?).
> Thanks.
>

bugzilla.redhat.com

Anyway - 6.8 will likely be your solution.

Thin provisioning is NOT supposed to be used in 'corner' cases - we keep 
improving them, but older versions simply had more of them, and it has 
always been clearly communicated: do not over-provision if you cannot 
provide the space.

Running out of pool space is not the same as running out of filesystem 
space - you cannot expect things to keep working nicely. The cooperation of 
the block layer with the filesystem, and metadata resilience, are being 
improved continually.

We have actually even seen users 'aiming' to hit a full pool as part of 
their regular work-flow - a bad, bad plan...
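
In practice that means providing real space in time - e.g. re-enabling the 
autoextend knobs that were deliberately disabled for these tests, together 
with monitoring (a sketch only; the values are just an example):

# /etc/lvm/lvm.conf (excerpt)
activation {
	monitoring = 1
	thin_pool_autoextend_threshold = 70   # extend the pool when usage reaches 70%
	thin_pool_autoextend_percent = 20     # grow it by 20% each time
}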

Regards

Zdenek

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-25  9:54         ` Zdenek Kabelac
@ 2016-04-25 16:52           ` Gionatan Danti
  2016-04-26  7:11           ` Gionatan Danti
  1 sibling, 0 replies; 10+ messages in thread
From: Gionatan Danti @ 2016-04-25 16:52 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Zdenek Kabelac

> 
> bugzilla.redhat.com
> 
> Anyway - 6.8 will likely be your solution.
> 
> Thin-provisioning is NOT supposed to be used at 'corner' cases - we
> improve them, but older version simply had more of them as there was
> always clearly communicated do not over-provision if you can't provide
> the space.
> 
> Out-of-space  is not equal if you run out of your filesystem space -
> you can't expect things will continue to work nicely - the cooperation
> of block layer with filesystem and metadata resilience are continually
> improved.
> 
> We have actually even seen users 'targeting' to hit full-pool as a
> part of regular work-flow - bad bad plan...
> 
> Regards
> 
> Zdenek

Hi Zdenek,
thanks for your courtesy.

I absolutely agree with you that in no case can metadata exhaustion be 
considered part of a "regular work-flow". At the same time, I often run 
"stress tests" specifically crafted to put the software/hardware in the 
worst possible condition. In this manner, should an exceptionally bad 
situation occur, I know how to deal with it.

I have another question: does this bug only happen when metadata space 
is exhausted? I am asking because, while searching for other people with 
the same error message, I read this bug report: 
http://www.redhat.com/archives/dm-devel/2014-March/msg00021.html

The bug described in the message above does not necessarily happen at 
metadata exhaustion time, as confirmed here: 
https://bugzilla.kernel.org/show_bug.cgi?id=68801

I understand that these are old (2014) bugs that were fixed in Linux 
3.14 but, as I use thin LVM volumes in production systems (albeit with 
RHEL 7 only), I want to be reasonably sure that no show-stopper bug can 
hit me. Are current RH OSes (6.7 and 7.2) immune to this bug (metadata 
corruption even when tmeta is not full)?
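
(In the meantime, nothing exotic: I simply keep an eye on pool usage with 
the plain lvs reporting fields, so tmeta never gets anywhere near full:)

lvs -o lv_name,data_percent,metadata_percent vgtest
# or, for scripting/alerting:
lvs --noheadings -o metadata_percent vgtest/ThinPool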

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-25  9:54         ` Zdenek Kabelac
  2016-04-25 16:52           ` Gionatan Danti
@ 2016-04-26  7:11           ` Gionatan Danti
  2016-04-27 11:11             ` Gionatan Danti
  1 sibling, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-26  7:11 UTC (permalink / raw)
  To: LVM general discussion and development

> bugzilla.redhat.com
>
> Anyway - 6.8 will likely be your solution.
>
> Thin-provisioning is NOT supposed to be used at 'corner' cases - we
> improve them, but older version simply had more of them as there was
> always clearly communicated do not over-provision if you can't provide
> the space.
>
> Out-of-space  is not equal if you run out of your filesystem space - you
> can't expect things will continue to work nicely - the cooperation of
> block layer with filesystem and metadata resilience are continually
> improved.
>
> We have actually even seen users 'targeting' to hit full-pool as a part
> of regular work-flow - bad bad plan...
>
> Regards
>
> Zdenek

[reposting due to sender error]

Hi Zdenek,
thanks for your courtesy.

I absolutely agree with you that in no case can metadata exhaustion be 
considered part of a "regular work-flow". At the same time, I often run 
"stress tests" specifically crafted to put the software/hardware in the 
worst possible condition. In this manner, should an exceptionally bad 
situation occur, I know how to deal with it.

I have another question: does this bug only happen when metadata space 
is exhausted? I am asking because, while searching for other people with 
the same error message, I read this bug report: 
http://www.redhat.com/archives/dm-devel/2014-March/msg00021.html

The bug described in the message above does not necessarily happen at 
metadata exhaustion time, as confirmed here: 
https://bugzilla.kernel.org/show_bug.cgi?id=68801

I understand that these are old (2014) bugs that were fixed in Linux 
3.14 but, as I use thin LVM volumes in production systems (albeit with 
RHEL 7 only), I want to be reasonably sure that no show-stopper bug can 
hit me. Are current RH OSes (6.7 and 7.2) immune to this bug (metadata 
corruption even when tmeta is not full)?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-26  7:11           ` Gionatan Danti
@ 2016-04-27 11:11             ` Gionatan Danti
  2016-05-03 10:05               ` Gionatan Danti
  0 siblings, 1 reply; 10+ messages in thread
From: Gionatan Danti @ 2016-04-27 11:11 UTC (permalink / raw)
  To: LVM general discussion and development

>
> I absolutely agree with you that in no case metadata exhaustion can be
> considered part of a "regular work-flow". At the same time, I often do
> "stress test" specifically crafted to put the software/hardware in the
> worst possible condition. In this manner, should an exceptionally bad
> situation occour, I know how to deal with it.
>
> I have another question: does this bug only happen when metadata space
> is exausted? I am asking this because searching for other peoples with
> the same error message, I read this bug report:
> http://www.redhat.com/archives/dm-devel/2014-March/msg00021.html
>
> The bug described in the message above does not necessarily happen at
> metadata exaustion time, as confirmed here:
> https://bugzilla.kernel.org/show_bug.cgi?id=68801
>
> I understand that these are old (2014) bugs and that were fixed in Linux
> 3.14 but, using thin LVM volumes in production systems (albeit with RH 7
> only), I want to be reasonably sure that no show-stopper bug can hit me.
> Are current RH OSes (6.7 and 7.2) immune from this bug (metadata
> corruption even if tmeta is not full)?
>
> Thanks.
>

Hi all,
any thoughts on that?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [linux-lvm] Testing ThinLVM metadata exhaustion
  2016-04-27 11:11             ` Gionatan Danti
@ 2016-05-03 10:05               ` Gionatan Danti
  0 siblings, 0 replies; 10+ messages in thread
From: Gionatan Danti @ 2016-05-03 10:05 UTC (permalink / raw)
  To: LVM general discussion and development

On 27/04/2016 13:11, Gionatan Danti wrote:
>>
>> I absolutely agree with you that in no case metadata exhaustion can be
>> considered part of a "regular work-flow". At the same time, I often do
>> "stress test" specifically crafted to put the software/hardware in the
>> worst possible condition. In this manner, should an exceptionally bad
>> situation occour, I know how to deal with it.
>>
>> I have another question: does this bug only happen when metadata space
>> is exausted? I am asking this because searching for other peoples with
>> the same error message, I read this bug report:
>> http://www.redhat.com/archives/dm-devel/2014-March/msg00021.html
>>
>> The bug described in the message above does not necessarily happen at
>> metadata exaustion time, as confirmed here:
>> https://bugzilla.kernel.org/show_bug.cgi?id=68801
>>
>> I understand that these are old (2014) bugs and that were fixed in Linux
>> 3.14 but, using thin LVM volumes in production systems (albeit with RH 7
>> only), I want to be reasonably sure that no show-stopper bug can hit me.
>> Are current RH OSes (6.7 and 7.2) immune from this bug (metadata
>> corruption even if tmeta is not full)?
>>
>> Thanks.

Hi all,
sorry for the bump, but I would really like to have a more precise 
understanding of the quoted bug, specifically:

- whether it presents itself on metadata-exhausted volumes only;
- if it can bite on a non-full tmeta, whether RHEL 6.7 and 7.2 are 
vulnerable to it.

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2016-05-03 10:10 UTC | newest]

Thread overview: 10+ messages
2016-04-18 14:25 [linux-lvm] Testing ThinLVM metadata exhaustion Gionatan Danti
2016-04-22 13:12 ` Gionatan Danti
2016-04-22 14:04   ` Zdenek Kabelac
2016-04-23  8:40     ` Gionatan Danti
2016-04-25  8:59       ` Gionatan Danti
2016-04-25  9:54         ` Zdenek Kabelac
2016-04-25 16:52           ` Gionatan Danti
2016-04-26  7:11           ` Gionatan Danti
2016-04-27 11:11             ` Gionatan Danti
2016-05-03 10:05               ` Gionatan Danti
