From: "Michael S. Tsirkin" <mst@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	kvm@vger.kernel.org
Subject: Re: [PATCH 0/5] Multiqueue virtio-scsi
Date: Thu, 30 Aug 2012 17:53:52 +0300	[thread overview]
Message-ID: <20120830145352.GA21724@redhat.com> (raw)
In-Reply-To: <1346154857-12487-1-git-send-email-pbonzini@redhat.com>

On Tue, Aug 28, 2012 at 01:54:12PM +0200, Paolo Bonzini wrote:
> Hi all,
> 
> this series adds multiqueue support to the virtio-scsi driver, based
> on Jason Wang's work on virtio-net.  It uses a simple queue steering
> algorithm that expects one queue per CPU.  LUNs in the same target always
> use the same queue (so that commands are not reordered); queue switching
> occurs when the request being queued is the only one for the target.
> Also based on Jason's patches, the virtqueue affinity is set so that
> each CPU is associated with one virtqueue.
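
For anyone skimming the thread, the steering rule described above boils
down to something like the model below.  This is only an illustrative
sketch: the type and function names are invented here, and the real driver
presumably protects the per-target state with a spinlock rather than bare
counters.

/* Standalone model of the per-target queue steering described above. */
#include <assert.h>

struct tgt_state {
        unsigned int reqs;       /* commands in flight for this target */
        unsigned int cur_queue;  /* request queue the target is bound to */
};

/*
 * Pick the request queue for a new command.  While the target has other
 * commands outstanding, keep its current queue so commands are not
 * reordered; only when this is the sole command for the target may it
 * switch to the queue associated with the submitting CPU.
 */
static unsigned int pick_queue(struct tgt_state *tgt, unsigned int cpu_queue)
{
        if (tgt->reqs == 0)
                tgt->cur_queue = cpu_queue;
        tgt->reqs++;
        return tgt->cur_queue;
}

static void complete_cmd(struct tgt_state *tgt)
{
        tgt->reqs--;
}

int main(void)
{
        struct tgt_state tgt = { .reqs = 0, .cur_queue = 0 };

        assert(pick_queue(&tgt, 2) == 2);  /* idle target: take CPU 2's queue */
        assert(pick_queue(&tgt, 5) == 2);  /* busy target: stay on queue 2 */
        complete_cmd(&tgt);
        complete_cmd(&tgt);
        assert(pick_queue(&tgt, 5) == 5);  /* idle again: may switch */
        return 0;
}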

Is there a spec patch? I did not see one.

> I tested the patches with fio, using up to 32 virtio-scsi disks backed
> by tmpfs on the host, and 1 LUN per target.
> 
> FIO configuration
> -----------------
> [global]
> rw=read
> bsrange=4k-64k
> ioengine=libaio
> direct=1
> iodepth=4
> loops=20
> 
> overall bandwidth (MB/s)
> ------------------------
> 
> # of targets    single-queue    multi-queue, 4 VCPUs    multi-queue, 8 VCPUs
> 1                  540               626                     599
> 2                  795               965                     925
> 4                  997              1376                    1500
> 8                 1136              2130                    2060
> 16                1440              2269                    2474
> 24                1408              2179                    2436
> 32                1515              1978                    2319
> 
> (The single-queue numbers were taken with 4 VCPUs; adding more VCPUs has
> very little impact on them.)
> 
> avg bandwidth per LUN (MB/s)
> ----------------------------
> 
> # of targets    single-queue    multi-queue, 4 VCPUs    multi-queue, 8 VCPUs
> 1                  540               626                     599
> 2                  397               482                     462
> 4                  249               344                     375
> 8                  142               266                     257
> 16                  90               141                     154
> 24                  58                90                     101
> 32                  47                61                      72
> 
> Testing this may require an irqbalance daemon that is built from git,
> due to http://code.google.com/p/irqbalance/issues/detail?id=37.
> Alternatively you can just set the affinity manually in /proc.
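
If building irqbalance from git is inconvenient, the manual /proc route
amounts to writing a hex CPU mask into /proc/irq/<N>/smp_affinity (as
root).  A throwaway helper along these lines works; the IRQ number and
mask below are placeholders, so check /proc/interrupts for the virtio
request-queue interrupts first.

#include <stdio.h>

/* Pin an interrupt to the CPUs in 'hexmask' (e.g. 0x2 = CPU 1). */
static int set_irq_affinity(unsigned int irq, unsigned int hexmask)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%x\n", hexmask);
        return fclose(f);
}

int main(void)
{
        return set_irq_affinity(42, 0x2) ? 1 : 0;  /* placeholder: IRQ 42 -> CPU 1 */
}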
> 
> Rusty, can you please give your Acked-by to the first two patches?
> 
> Jason Wang (2):
>   virtio-ring: move queue_index to vring_virtqueue
>   virtio: introduce an API to set affinity for a virtqueue
> 
> Paolo Bonzini (3):
>   virtio-scsi: allocate target pointers in a separate memory block
>   virtio-scsi: pass struct virtio_scsi to virtqueue completion function
>   virtio-scsi: introduce multiqueue support
> 
>  drivers/lguest/lguest_device.c         |    1 +
>  drivers/remoteproc/remoteproc_virtio.c |    1 +
>  drivers/s390/kvm/kvm_virtio.c          |    1 +
>  drivers/scsi/virtio_scsi.c             |  200 ++++++++++++++++++++++++--------
>  drivers/virtio/virtio_mmio.c           |   11 +-
>  drivers/virtio/virtio_pci.c            |   58 ++++++++-
>  drivers/virtio/virtio_ring.c           |   17 +++
>  include/linux/virtio.h                 |    4 +
>  include/linux/virtio_config.h          |   21 ++++
>  9 files changed, 253 insertions(+), 61 deletions(-)
