From: Marian Csontos <mcsontos@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Cc: g.danti@assyoma.it
Subject: Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
Date: Fri, 12 Sep 2014 13:54:51 +0200
Message-ID: <5412DF0B.40706@redhat.com>
In-Reply-To: <ec2642484312b60a76e570e45a4f2bc9@assyoma.it>
On 09/11/2014 03:49 PM, Gionatan Danti wrote:
>> Hi all,
>> it seems that, when using LVM, the I/O transfer size has a hard
>> limit at about 512 KB per I/O operation. Large I/O transfers can be
>> crucial for performance, so I am trying to understand whether I can change that.
>>
>> Some info: uname -a (CentOS 6.5 x86_64)
>> Linux blackhole.assyoma.it 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun
>> 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>>
>> From my understanding, the maximum I/O size for a physical device can
>> be tuned via /sys/block/sdX/queue/max_sectors_kb.
>>
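For reference, a minimal way to check and raise the limit (sdd is taken
from your example below; the soft limit cannot exceed the hardware ceiling
in max_hw_sectors_kb, and writing it requires root):

  cat /sys/block/sdd/queue/max_hw_sectors_kb        # hardware ceiling, read-only
  cat /sys/block/sdd/queue/max_sectors_kb           # current soft limit
  echo 2048 > /sys/block/sdd/queue/max_sectors_kb   # raise the soft limit

The value does not survive a reboot, so it is usually set from a udev rule
or an init script.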
>> Some quick iostat -k -x 1 runs show that, for a physical device, the
>> tunable works:
>>
>> # max_sectors_kb=512 (default)
>> dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
>> iostat -x -k 1 /dev/sdd: avgrq-sz=1024 (1024 sectors = 512KB =
>> max_sectors_kb)
>>
>> # max_sectors_kb=2048
>> dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
>> iostat -x -k 1 /dev/sdd: avgrq-sz=2048 (2048 sectors = 1024 KB)
>>
>> dd if=/dev/zero of=/dev/sdd1 bs=4M count=16 oflag=direct
>> iostat -x -k 1 /dev/sdd: avgrq-sz=2730.67 (2730 sectors = ~1365 KB)
>>
>> As you can see, I can't always reach 100% efficiency, but I come
>> reasonably close.
>>
>>
>> The problem is that, when using LVM2 (on software RAID 10), I cannot
>> really increase the I/O transfer size. Setting sd*, md*, and dm-* all to
>> max_sectors_kb=2048 leads to _no_ increase in I/O request size, while
>> decreasing max_sectors_kb works properly:
>>
>> # max_sectors_kb=2048
>> dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
>> iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=1024.00 (it remains at 512 KB)
>>
>> # max_sectors_kb=256
>> dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
>> iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=512.00 (512 sectors = 256KB
>> = max_sectors_kb)
>>
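Just to be sure we are talking about the same thing, here is a sketch of
what I assume you did: raising the limit on every layer of the stack
(member disks, the md array, and the dm device backing the LV). The device
names below are only placeholders, adjust them to your setup:

  lvs -o+devices vg_kvm/TEST                  # which md/PV devices back the LV
  for q in /sys/block/sdb/queue /sys/block/sdc/queue \
           /sys/block/md0/queue /sys/block/dm-3/queue; do
      echo 2048 > "$q/max_sectors_kb"
  done
  grep . /sys/block/*/queue/max_sectors_kb    # confirm the value on every layer

If any layer still reports 512, requests will be split at that layer no
matter what the layers above allow.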
>> Now, two questions:
>> 1) why do I see this hard limit?
1. As there are more disks in the VG, can they all work with a larger
max_sectors_kb? (Some example commands for 1-3 are sketched below.)
2. Does it work for the md device alone, without LVM stacked on top of it?
3. What is your physical extent size?
vgs -o vg_extent_size vg_kvm  # or simply vgs -v
4. If all of the above looks fine, then it may be related to LVM.
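A few example commands that may help answer 1-3 (device names are again
placeholders; I use a read test so nothing on the array is overwritten):

  grep . /sys/block/sd?/queue/max_hw_sectors_kb             # 1. per-disk hardware ceiling
  dd if=/dev/md0 of=/dev/null bs=2M count=64 iflag=direct   # 2. test md without LVM on top
  iostat -x -k 1 /dev/md0                                   #    run in a second terminal, watch avgrq-sz
  vgs -o vg_name,vg_extent_size vg_kvm                      # 3. extent size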
-- Marian
>> 2) can I change this behavior?
>>
>> Thank you very much.
>
> Hi all,
> sorry for bumping up this thread.
>
> Does anyone have an explanation for what I see? Am I missing something?
>
> Thanks.
>