From: Gionatan Danti <g.danti@assyoma.it>
To: linux-lvm@redhat.com
Cc: g.danti@assyoma.it
Subject: Re: [linux-lvm] LVM and I/O block size (max_sectors_kb)
Date: Thu, 31 Jul 2014 09:30:18 +0200
Message-ID: <53D9F08A.5050503@assyoma.it>
In-Reply-To: <53B41EA2.3010203@assyoma.it>

On 02/07/2014 17:00, Gionatan Danti wrote:
> Hi all,
> it seems that, when using LVM, the I/O transfer size has a hard limit
> at about 512 KB per request. Large I/O transfers can be crucial for
> performance, so I am trying to understand whether I can change that.
>
> Some info: uname -a (CentOS 6.5 x86_64)
> Linux blackhole.assyoma.it 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19
> 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> From my understanding, the maximum I/O request size for a physical
> device can be tuned via /sys/block/sdX/queue/max_sectors_kb.
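>
> For example (sdX is a placeholder for the real disk; writing the
> tunable needs root):
>
>    # Read the current limit (in KB), then raise it.
>    cat /sys/block/sdX/queue/max_sectors_kb        # 512 by default here
>    echo 2048 > /sys/block/sdX/queue/max_sectors_kb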
>
> Some quick iostat -x -k 1 runs show that, for the physical device, the
> tunable works:
>
> # max_sectors_kb=512 (default)
> dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
> iostat -x -k 1 /dev/sdd: avgrq-sz=1024 (1024 sectors = 512 KB =
> max_sectors_kb)
>
> # max_sectors_kb=2048
> dd if=/dev/zero of=/dev/sdd1 bs=2M count=16 oflag=direct
> iostat -x -k 1 /dev/sdd: avgrq-sz=2048 (2048 sectors = 1024 KB)
>
> dd if=/dev/zero of=/dev/sdd1 bs=4M count=16 oflag=direct
> iostat -x -k 1 /dev/sdd: avgrq-sz=2730.67 (2730.67 sectors = ~1365 KB)
>
> As you can see, I can't always reach 100% efficiency, but I come
> reasonably close.
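>
> A small wrapper along these lines makes the above test repeatable. The
> device names are placeholders, dd overwrites the target, and the awk
> field index assumes this sysstat version, where avgrq-sz is the 8th
> field of iostat -x -k output:
>
>    #!/bin/bash
>    # Sketch: measure mean request size while writing with O_DIRECT.
>    DEV=/dev/sdd1   # placeholder partition -- WARNING: data on it is destroyed
>    DISK=sdd        # whole-disk name as printed by iostat
>    dd if=/dev/zero of="$DEV" bs=4M count=16 oflag=direct &
>    # avgrq-sz is reported in 512-byte sectors; divide by 2 to get KB.
>    iostat -x -k 1 3 | awk -v d="$DISK" '$1 == d { print $8 / 2, "KB/request" }'
>    wait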
>
>
> The problem is that, when using LVM2 (on software RAID 10), I cannot
> really increase the I/O transfer size. Setting the sd*, md*, and dm-*
> devices all to max_sectors_kb=2048 (see the loop sketch after the
> examples below) leads to _no_ increase in request size, while
> decreasing max_sectors_kb works properly:
>
> # max_sectors_kb=2048
> dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
> iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=1024.00 (it remains at 512 KB)
>
> # max_sectors_kb=256
> dd if=/dev/zero of=/dev/vg_kvm/TEST bs=2M count=64 oflag=direct
> iostat -x -k 1 /dev/vg_kvm/TEST: avgrq-sz=512.00 (512 sectors = 256 KB
> = max_sectors_kb)
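>
> For reference, a loop along these lines applies the setting to every
> layer of the stack (the sd/md/dm globs are illustrative and need
> adjusting for the actual host; run as root):
>
>    # Apply the same max_sectors_kb to each layer of the block stack.
>    for q in /sys/block/sd*/queue/max_sectors_kb \
>             /sys/block/md*/queue/max_sectors_kb \
>             /sys/block/dm-*/queue/max_sectors_kb; do
>        echo 2048 > "$q"
>    done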
>
> Now, two questions:
> 1) why do I see this hard limit?
> 2) can I change this behavior?
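>
> One detail worth checking here (a guess on my part, not a confirmed
> cause): max_sectors_kb cannot be raised above the device's
> max_hw_sectors_kb, and device-mapper recomputes its stacked queue
> limits from the underlying devices when a table is loaded, so a
> manually written value may not stick. A sketch for inspecting the
> LV's dm node:
>
>    # Resolve the dm-N node behind the LV, then dump both limits.
>    DM=$(basename "$(readlink -f /dev/vg_kvm/TEST)")   # e.g. dm-3
>    grep . /sys/block/"$DM"/queue/max_sectors_kb \
>           /sys/block/"$DM"/queue/max_hw_sectors_kb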
>
> Thank you very much.
>

Anyone with some ideas? :)

-- 
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8
