* [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: Gionatan Danti @ 2014-07-02 15:00 UTC (permalink / raw)
  To: linux-lvm; +Cc: g.danti

Hi all,
it seems that, when using LVM, the I/O block transfer size has a hard
limit at about 512 KB per I/O operation. Large I/O transfers can be
crucial for performance, so I am trying to understand if I can change
that.

Some info: uname -a (CentOS 6.5 x86_64)
Linux blackhole.assyoma.it 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 
21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

From my understanding, the maximum I/O block size for a physical device
can be tuned via /sys/block/sdX/queue/max_sectors_kb.
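
For reference, the value can be read and changed like this (a sketch;
sdd is the disk used in the tests below, and writing the value requires
root):

cat /sys/block/sdd/queue/max_sectors_kb          # current limit, in KB
cat /sys/block/sdd/queue/max_hw_sectors_kb       # hardware ceiling
echo 2048 > /sys/block/sdd/queue/max_sectors_kb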

Some quick iostat -x -k 1 runs show that, for a physical device, the tunable works:

# max_sectors_kb=512 (default)
dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
iostat -x -k 1 /dev/sdd: avgrq-sz=1024 (1024 sectors = 512KB = 
max_sectors_kb)

# max_sectors_kb=2048
dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
iostat -x -k 1 /dev/sdd: avgrq-sz=2048 (2048 sectors = 1024 KB)

dd if=/dev/zero of=/dev/sdd1 bs=4M count=16 oflag=direct
iostat -x -k 1 /dev/sdd: avgrq-sz=2730.67 (2730.67 sectors = ~1365 KB)

As you can see, I can't always reach 100% efficiency, but I come 
reasonably close.


The problem is that, when using LVM2 (on software RAID 10), I cannot 
really increase the I/O transfer size. Setting sd*, md*, and dm-* 
devices all to max_sectors_kb=2048 leads to _no_ increase in I/O block 
size, while decreasing max_sectors_kb works properly:

# max_sectors_kb=2048
dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=1024.00 (it remains at 512 KB)

# max_sectors_kb=256
dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=512.00 (512 sectors = 256KB = 
max_sectors_kb)
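
(For completeness, the dm device backing the LV can be checked directly;
a sketch, where dm-2 is a placeholder for the LV's actual minor number
as reported by dmsetup ls:)

dmsetup ls                                   # maps LV names to dm minor numbers
cat /sys/block/dm-2/queue/max_sectors_kb
cat /sys/block/dm-2/queue/max_hw_sectors_kb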

Now, two questions:
1) why do I see this hard limit?
2) can I change this behavior?

Thank you very much.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

* Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: Gionatan Danti @ 2014-07-31  7:30 UTC (permalink / raw)
  To: linux-lvm; +Cc: g.danti

On 02/07/2014 17:00, Gionatan Danti wrote:
> it seems that, when using LVM, the I/O block transfer size has a hard
> limit at about 512 KB per I/O operation. [...]

Anyone with some ideas? :)

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

* Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: Gionatan Danti @ 2014-09-11 13:49 UTC (permalink / raw)
  To: linux-lvm; +Cc: g.danti

> it seems that, when using LVM, the I/O block transfer size has a hard
> limit at about 512 KB per I/O operation. [...]

Hi all,
sorry for bumping up this thread.

Does anyone have an explanation for what I see? Am I missing something?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

* Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: Marian Csontos @ 2014-09-12 11:54 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: g.danti

On 09/11/2014 03:49 PM, Gionatan Danti wrote:
>> it seems that, when using LVM, the I/O block transfer size has a hard
>> limit at about 512 KB per I/O operation. [...]
>>
>> The problem is that, when using LVM2 (on software RAID 10), I cannot
>> really increase the I/O transfer size. Setting sd*, md*, and dm-*
>> devices all to max_sectors_kb=2048 leads to _no_ increase in I/O
>> block size. [...]
>>
>> Now, two questions:
>> 1) why do I see this hard limit?

1. As there are multiple disks in the VG, can they all work with the 
larger max_sectors_kb? (See the check below.)

2. Does it work for the md device without stacking LVM on top of it?

3. What's your physical extent size?

     vgs -o vg_extent_size vg_kvm  # or simply vgs -v

4. If all of the above is fine, then it may be related to LVM.
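
A quick way to check points 1 and 2 across the whole stack (a sketch;
sd[a-d] and md0 are assumed names, adjust to match the actual setup):

for dev in /sys/block/sd[a-d] /sys/block/md0 /sys/block/dm-*; do
    # current limit / hardware ceiling for each layer
    echo "$dev: $(cat $dev/queue/max_sectors_kb) / $(cat $dev/queue/max_hw_sectors_kb)"
done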

-- Marian

* Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: Gionatan Danti @ 2014-09-12 13:46 UTC (permalink / raw)
  To: Marian Csontos; +Cc: g.danti, LVM general discussion and development

Hi Marian,

> 1. As there are multiple disks in the VG, can they all work with the
> larger max_sectors_kb?

All four disks and relative MD device has max_sectors_kb set to the 
required (large) 2048 value.
They all are recent (circa 2010) SATA disks, albeit from different 
vendors.

> 2. Does it work for the md device without stacking LVM on top of it?

I did not try to directly use the MD device. Let me test this and I will 
report here.

> 3. What's your physical extent size?
> 
>     vgs -o vg_extent_size vg_kvm  # or simply vgs -v

PE size is at the default (4 MiB).

> 4. If all of the above is fine, then it may be related to LVM.

I think so, because I did some tests with a _single_ disk (an SSD, 
actually) with and without LVM on top. Without LVM, I see the normal and 
expected behavior: the I/O transfer size increases with max_sectors_kb. 
With LVM on top, I see exactly the behavior I am reporting: the maximum 
I/O transfer size seems capped at 512 KB, regardless of the 
max_sectors_kb setting.

It really seems to be something related to the LVM layer.
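
(The comparison boils down to something like this; a sketch, with sde
standing in for the SSD and vg_test/lv_test for a single-PV test volume
on it. Reads are used here so nothing is overwritten:)

echo 2048 > /sys/block/sde/queue/max_sectors_kb
dd if=/dev/sde of=/dev/null bs=2M count=64 iflag=direct
# iostat: avgrq-sz grows past 1024 sectors, as expected
dd if=/dev/vg_test/lv_test of=/dev/null bs=2M count=64 iflag=direct
# iostat: avgrq-sz stays at 1024.00 (512 KB)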

> -- Marian

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

* Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
From: matthew patton @ 2014-09-12 15:48 UTC (permalink / raw)
  To: LVM general discussion and development

My understanding was that this is an OLD problem (as designed?). Conceivably, LVM could be written to be smart enough to query all of the underlying PVs and arrive at the lowest common value <= PhysicalExtent-Size, but it was probably more expedient to just hard-code it to the native BIO value.
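
(In userspace terms, the "lowest common value" would be computed roughly
like this; a sketch, reusing the vg_kvm name from the earlier tests,
with an admittedly crude partition-to-disk mapping:)

for pv in $(pvs --noheadings -o pv_name,vg_name | awk '$2 == "vg_kvm" {print $1}'); do
    dev=$(basename "$pv")     # e.g. sdd1
    disk=${dev%%[0-9]*}       # strip the trailing partition number -> sdd
    cat "/sys/block/$disk/queue/max_sectors_kb"
done | sort -n | head -1      # the smallest limit among the PVs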
