From: Cong Meng <mc@linux.vnet.ibm.com>
To: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: Jens Axboe <axboe@kernel.dk>,
	stefanha@linux.vnet.ibm.com,
	linux-scsi <linux-scsi@vger.kernel.org>,
	linuxram@us.ibm.com, qemu-devel@nongnu.org,
	target-devel@vger.kernel.org,
	Anthony Liguori <anthony@codemonkey.ws>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] IO performance test on the tcm-vhost scsi
Date: Thu, 14 Jun 2012 17:57:34 +0800	[thread overview]
Message-ID: <1339667854.28851.14.camel@mengcong> (raw)
In-Reply-To: <1339614481.13709.354.camel@haakon2.linux-iscsi.org>

On Wed, 2012-06-13 at 12:08 -0700, Nicholas A. Bellinger wrote:
> On Wed, 2012-06-13 at 18:13 +0800, mengcong wrote:
> > Hi folks, I did an IO performance test on tcm-vhost scsi, and I want to
> > share the resulting data here.
> > 
> > 
> >                     seq-read        seq-write       rand-read     rand-write
> >                     8k     256k     8k     256k     8k   256k     8k   256k
> > ----------------------------------------------------------------------------
> > bare-metal          67951  69802    67064  67075    1758 29284    1969 26360
> > tcm-vhost-iblock    61501  66575    51775  67872    1011 22533    1851 28216
> > tcm-vhost-pscsi     66479  68191    50873  67547    1008 22523    1818 28304
> > virtio-blk          26284  66737    23373  65735    1724 28962    1805 27774
> > scsi-disk           36013  60289    46222  62527    1663 12992    1804 27670
> > 
> > unit: KB/s
> > seq-read/write = sequential read/write
> > rand-read/write = random read/write
> > 8k and 256k are the I/O block sizes
> > 
> > In tcm-vhost-iblock test, the emulate_write_cache attr was enabled.
> > In virtio-blk test, cache=none,aio=native were set.
> > In scsi-disk test, cache=none,aio=native were set, and LSI HBA was used.
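For reference, those settings correspond roughly to QEMU invocations like the
following (a sketch: the qemu-kvm binary name and the /dev/sdb backing device
are assumptions; only -smp 2 / -m 2048 match the VM described below):

    # virtio-blk case: cache=none, aio=native on the raw backing device
    qemu-kvm -smp 2 -m 2048 \
        -drive file=/dev/sdb,if=none,id=d0,format=raw,cache=none,aio=native \
        -device virtio-blk-pci,drive=d0

    # scsi-disk case: same drive options, attached behind an LSI HBA
    qemu-kvm -smp 2 -m 2048 \
        -device lsi53c895a,id=hba0 \
        -drive file=/dev/sdb,if=none,id=d1,format=raw,cache=none,aio=native \
        -device scsi-disk,bus=hba0.0,drive=d1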
> > 
> > I also tried to run the test with a scsi-generic LUN (passing the
> > physical device through to the guest as /dev/sgX), but I couldn't set
> > it up successfully. It's a pity.
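For anyone retrying that case, a scsi-generic passthrough is typically wired
up along these lines (a sketch, not the command that failed here; /dev/sg3 is
a placeholder and the remaining options are elided):

    # attach a host SCSI device to the guest via sg passthrough
    qemu-kvm ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=/dev/sg3,if=none,id=sg0 \
        -device scsi-generic,bus=scsi0.0,drive=sg0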
> > 
> > Benchmark tool: fio, with ioengine=aio,direct=1,iodepth=8 set for all tests.
> > kvm vm: 2 cpus and 2G ram
> > 
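For reproducibility, one of the eight fio jobs implied by those settings would
look roughly like this (a sketch: the guest device path, job name, and runtime
are assumptions, and fio spells its Linux native AIO engine "libaio"):

    ; rand-read, 8k block-size case; the other seven jobs vary rw= and bs=
    [rand-read-8k]
    ioengine=libaio
    direct=1
    iodepth=8
    rw=randread
    bs=8k
    ; placeholder guest block device and runtime
    filename=/dev/vdb
    runtime=60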
> 
> These initial performance results look quite promising for virtio-scsi.
> 
> I'd be really interested to see how a raw flash block device backend
> that locally can do ~100K 4k mixed R/W random IOPs compares with
> virtio-scsi guest performance as the random small block fio workload
> increases..
Flash block == solid state disk? I don't have one on hand.
> 
> Also note there is a bottleneck wrt random small-block I/O
> performance (per LUN) on the Linux/SCSI initiator side that is affecting
> things here.  We've run into this limitation numerous times when using
> SCSI LLDs as backend TCM devices, and I usually recommend using iblock
> export with raw block flash backends for achieving the best small-block
> random I/O performance results.  A number of high-performance flash
> storage folks do something similar with raw block access (Jens is CC'ed).
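For context, an iblock export over a raw block device is created along these
lines at the targetcli shell (a sketch: /dev/sdX and the backstore name are
placeholders, and the vhost fabric wiring is omitted):

    # create an iblock backstore on top of the raw device
    /backstores/iblock create name=flash0 dev=/dev/sdX
    # enable write-cache emulation, as in the tcm-vhost-iblock test above
    /backstores/iblock/flash0 set attribute emulate_write_cache=1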
> 
> As per Stefan's earlier question, how does virtio-scsi to QEMU SCSI
> userspace compare with these results..?  Is there a reason why these
> were not included in the initial results..?
> 
That was an oversight on my part. I will run that test pattern later.

> Thanks Meng!
> 
> --nab
> 
