From: Asias He <asias@redhat.com>
To: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: target-devel@vger.kernel.org,
	Stefan Hajnoczi <stefanha@gmail.com>,
	linuxram@us.ibm.com, qemu-devel@nongnu.org,
	"Nicholas A. Bellinger" <nab@linux-iscsi.org>,
	Cong Meng <mc@linux.vnet.ibm.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] IO performance test on the tcm-vhost scsi
Date: Fri, 15 Jun 2012 11:28:20 +0800	[thread overview]
Message-ID: <4FDAABD4.5060901@redhat.com> (raw)
In-Reply-To: <20120614120722.GA7128@stefanha-thinkpad.localdomain>

On 06/14/2012 08:07 PM, Stefan Hajnoczi wrote:
> On Thu, Jun 14, 2012 at 05:45:22PM +0800, Cong Meng wrote:
>> On Thu, 2012-06-14 at 09:30 +0100, Stefan Hajnoczi wrote:
>>> On Wed, Jun 13, 2012 at 11:13 AM, mengcong <mc@linux.vnet.ibm.com> wrote:
>>>>                     seq-read        seq-write       rand-read     rand-write
>>>>                     8k     256k     8k     256k     8k   256k     8k   256k
>>>> ----------------------------------------------------------------------------
>>>> bare-metal          67951  69802    67064  67075    1758 29284    1969 26360
>>>> tcm-vhost-iblock    61501  66575    51775  67872    1011 22533    1851 28216
>>>> tcm-vhost-pscsi     66479  68191    50873  67547    1008 22523    1818 28304
>>>> virtio-blk          26284  66737    23373  65735    1724 28962    1805 27774
>>>> scsi-disk           36013  60289    46222  62527    1663 12992    1804 27670
>>>
>>>>
>>>> unit: KB/s
>>>> seq-read/write = sequential read/write
>>>> rand-read/write = random read/write
>>>> 8k, 256k = block size of the I/O
>>>
>>> What strikes me is how virtio-blk performs significantly worse than
>>> bare metal and tcm_vhost for seq-read/seq-write 8k.  The good
>>> tcm_vhost results suggest that the overhead is not the virtio
>>> interface itself, since tcm_vhost implements virtio-scsi.
>>>
>>> To drill down on the tcm_vhost vs userspace performance gap we need
>>> virtio-scsi userspace results.  QEMU needs to use the same block
>>> device as the tcm-vhost-iblock benchmark.
>>>
>>> Cong: Is it possible to collect the virtio-scsi userspace results
>>> using the same block device as tcm-vhost-iblock and -drive
>>> format=raw,aio=native,cache=none?
>>>
>>
>> virtio-scsi-raw     43065  69729    52052  67378    1757 29419    2024 28135
>>
>> qemu ....\
>> -drive file=/dev/sdb,format=raw,if=none,id=sdb,cache=none,aio=native \
>> -device virtio-scsi-pci,id=mcbus \
>> -device scsi-disk,drive=sdb
>>
>> There is only one SCSI HBA.
>> /dev/sdb is the disk on which all tests have been done.
>>
>> Is this what you want?
>
> Perfect, thanks.  virtio-scsi userspace is much better than virtio-blk
> here.  That's unexpected since they both use the QEMU block layer.  If
> anything, I would have expected virtio-blk to be faster!
>
> I wonder if the request patterns being sent through virtio-blk and
> virtio-scsi are different.  Asias discovered that the guest I/O
> scheduler and request merging make a big difference between QEMU and
> native KVM tool performance.  It could be the same thing here that
> causes virtio-blk and virtio-scsi userspace to produce quite different
> results.

Yes. Cong, can you try this:

echo noop > /sys/block/$disk/queue/scheduler   # switch to the no-op I/O scheduler
echo 2 > /sys/block/$disk/queue/nomerges       # 2 = disable all request merging

This disables request merging in the guest kernel. The host-side I/O 
processing speed has a large impact on the guest's request pattern, 
especially for sequential reads and writes.
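
To see whether the request patterns really differ, something like the 
following could be run in the guest during each benchmark (a sketch, not 
something we have measured here; it assumes blktrace is installed in the 
guest and that $disk names the device under test):

disk=vdb   # hypothetical guest device name; substitute the real one
cat /sys/block/$disk/queue/scheduler   # active scheduler is shown in [brackets]
cat /sys/block/$disk/queue/nomerges    # should print 2 after the echo above
# capture and decode the request stream live
blktrace -d /dev/$disk -o - | blkparse -i -

Comparing the request sizes and offsets in the blkparse output between 
the virtio-blk and virtio-scsi runs should show directly whether the 
guest is merging and submitting I/O differently in the two cases.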

> The second question is why is tcm_vhost faster than virtio-scsi
> userspace.
>
> Stefan
>


-- 
Asias


Thread overview: 12+ messages
2012-06-13 10:13 [Qemu-devel] IO performance test on the tcm-vhost scsi mengcong
2012-06-13 10:35 ` Stefan Hajnoczi
2012-06-13 19:08 ` Nicholas A. Bellinger
2012-06-14  9:57   ` Cong Meng
2012-06-14 20:41     ` Nicholas A. Bellinger
2012-06-15 10:35       ` Stefan Hajnoczi
2012-06-14  8:30 ` Stefan Hajnoczi
2012-06-14  9:45   ` Cong Meng
2012-06-14 12:07     ` Stefan Hajnoczi
2012-06-14 12:27       ` Paolo Bonzini
2012-06-14 20:45         ` Nicholas A. Bellinger
2012-06-15  3:28       ` Asias He [this message]
