From: Cong Meng <mc@linux.vnet.ibm.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: stefanha@linux.vnet.ibm.com, linuxram@us.ibm.com,
qemu-devel@nongnu.org,
"Nicholas A. Bellinger" <nab@linux-iscsi.org>,
target-devel@vger.kernel.org,
Anthony Liguori <anthony@codemonkey.ws>,
Paolo Bonzini <pbonzini@redhat.com>, Asias He <asias@redhat.com>
Subject: Re: [Qemu-devel] IO performance test on the tcm-vhost scsi
Date: Thu, 14 Jun 2012 17:45:22 +0800 [thread overview]
Message-ID: <1339667122.28851.8.camel@mengcong> (raw)
In-Reply-To: <CAJSP0QUDH=L4AkBC+pniJZHpj=H_Phq0YJ+MA2HZfhwgDGsjig@mail.gmail.com>
On Thu, 2012-06-14 at 09:30 +0100, Stefan Hajnoczi wrote:
> On Wed, Jun 13, 2012 at 11:13 AM, mengcong <mc@linux.vnet.ibm.com> wrote:
> >                    seq-read        seq-write       rand-read       rand-write
> >                    8k      256k    8k      256k    8k      256k    8k      256k
> > -----------------------------------------------------------------------------
> > bare-metal         67951   69802   67064   67075   1758    29284   1969    26360
> > tcm-vhost-iblock   61501   66575   51775   67872   1011    22533   1851    28216
> > tcm-vhost-pscsi    66479   68191   50873   67547   1008    22523   1818    28304
> > virtio-blk         26284   66737   23373   65735   1724    28962   1805    27774
> > scsi-disk          36013   60289   46222   62527   1663    12992   1804    27670
>
> >
> > unit: KB/s
> > seq-read/write = sequential read/write
> > rand-read/write = random read/write
> > 8k, 256k are the block sizes of the I/O
>
> What strikes me is how virtio-blk performs significantly worse than
> bare metal and tcm_vhost for seq-read/seq-write 8k. The good
> tcm_vhost results suggest that the overhead is not the virtio
> interface itself, since tcm_vhost implements virtio-scsi.
>
> To drill down on the tcm_vhost vs userspace performance gap we need
> virtio-scsi userspace results. QEMU needs to use the same block
> device as the tcm-vhost-iblock benchmark.
>
> Cong: Is it possible to collect the virtio-scsi userspace results
> using the same block device as tcm-vhost-iblock and -drive
> format=raw,aio=native,cache=none?
>
virtio-scsi-raw    43065   69729   52052   67378   1757    29419   2024    28135
(same columns and units, KB/s, as the table quoted above)
qemu ....\
-drive file=/dev/sdb,format=raw,if=none,id=sdb,cache=none,aio=native \
-device virtio-scsi-pci,id=mcbus \
-device scsi-disk,drive=sdb
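(For anyone not familiar with these options, here is a commented sketch of this
style of invocation. The machine options before -drive are placeholders, and
the part of the real command line that was elided above is not reproduced:)

# cache=none       - open the backing device with O_DIRECT, bypassing the
#                    host page cache
# aio=native       - submit I/O through Linux native AIO rather than QEMU's
#                    thread-pool backend
# if=none          - do not attach the drive to a default bus; it is wired
#                    up explicitly with -device
# virtio-scsi-pci  - the virtio-scsi HBA presented to the guest
# scsi-disk        - a SCSI disk LUN on that HBA, backed by the drive above
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
    -drive file=/dev/sdb,format=raw,if=none,id=sdb,cache=none,aio=native \
    -device virtio-scsi-pci,id=mcbus \
    -device scsi-disk,drive=sdb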
There is only one SCSI HBA.
/dev/sdb is the disk on which all the tests were run.
Is this what you want?
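In case it helps with reproducing the numbers, the eight workloads map onto
something like the fio invocations sketched below. This assumes a fio-style
tool; the benchmark tool, iodepth and runtime actually used are not shown in
this mail, so those values are guesses, and the write jobs are destructive to
the data on /dev/sdb:

# WARNING: the write/randwrite jobs overwrite the contents of /dev/sdb.
for rw in read write randread randwrite; do
    for bs in 8k 256k; do
        fio --name=${rw}-${bs} --filename=/dev/sdb \
            --rw=$rw --bs=$bs --direct=1 \
            --ioengine=libaio --iodepth=1 \
            --runtime=60 --time_based
    done
done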
Cong Meng
> Stefan
>