From: Paolo Bonzini <pbonzini@redhat.com>
To: Venkatesh Srinivas <venkateshs@google.com>
Cc: kvm@vger.kernel.org, linux-scsi@vger.kernel.org, mst@redhat.com,
	linux-kernel@vger.kernel.org, JBottomley@parallels.com,
	virtualization@lists.linux-foundation.org, vsrinivas@ops101.org,
	mikew@google.com
Subject: Re: [PATCH V5 4/5] virtio-scsi: introduce multiqueue support
Date: Wed, 20 Mar 2013 10:53:59 +0100	[thread overview]
Message-ID: <51498737.8000800@redhat.com> (raw)
In-Reply-To: <20130320014657.GA14714@google.com>

On 20/03/2013 02:46, Venkatesh Srinivas wrote:
> This looks pretty good!
> 
> I rather like the (lack of) locking in I/O completion (around the req
> count vs. target/queue binding). It is unfortunate that you need to hold
> the per-target lock in virtscsi_pick_vq() though; have any idea
> how much that lock hurts?

It doesn't hurt; the lock is mostly uncontended.

- if you have lots of I/O, it's held for a very small period of time; if
you have little I/O, it's uncontended anyway.

- the SCSI layer will serialize on the host lock anyway before calling
into the LLD.  Locks are "pipelined" so that in the end the host lock
will be a bigger bottleneck than the others.

Most of the time it only costs 2 extra atomic operations, which should
be half a microsecond or less.
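
To make that concrete, here is a rough sketch (not the patch itself) of the
pattern being discussed: the per-target lock is held only while reading or
re-establishing the target-to-virtqueue binding, and the request count
accounts for the two atomic operations (one on submission, one on
completion).  The names used below (tgt_lock, reqs, req_vq, num_queues) are
illustrative assumptions, not copied from the series.

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/smp.h>
#include <linux/types.h>

struct vq_sketch {
	/* ... virtqueue state ... */
};

struct tgt_sketch {
	spinlock_t tgt_lock;		/* protects req_vq when reqs goes 0 -> 1 */
	atomic_t reqs;			/* outstanding requests to this target */
	struct vq_sketch *req_vq;	/* queue currently bound to this target */
};

struct host_sketch {
	u32 num_queues;
	struct vq_sketch *req_vqs;	/* array of num_queues request queues */
};

/* Submission path: the lock is held only long enough to read or
 * (re)establish the binding, i.e. "a very small period of time". */
static struct vq_sketch *pick_vq_sketch(struct host_sketch *vscsi,
					struct tgt_sketch *tgt)
{
	struct vq_sketch *vq;
	unsigned long flags;

	spin_lock_irqsave(&tgt->tgt_lock, flags);
	if (atomic_inc_return(&tgt->reqs) > 1) {
		/* Other requests in flight: keep the existing binding. */
		vq = tgt->req_vq;
	} else {
		/* First outstanding request: rebind to this CPU's queue. */
		u32 queue = smp_processor_id() % vscsi->num_queues;
		tgt->req_vq = vq = &vscsi->req_vqs[queue];
	}
	spin_unlock_irqrestore(&tgt->tgt_lock, flags);
	return vq;
}

/* Completion path: no lock, just the second of the two atomics. */
static void complete_one_sketch(struct tgt_sketch *tgt)
{
	atomic_dec(&tgt->reqs);
}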

Paolo

> Just two minor comments:
> 
> (in struct virtio_scsi_target_data):
> +       /* This spinlock never help at the same time as vq_lock. */
>                                ^^^^ held?
> 
> (in struct virtio_scsi):
> +       /* Does the affinity hint is set for virtqueues? */
> Could you rephrase that, please?
> 
> Tested on qemu and w/ Google Compute Engine's virtio-scsi device.
> 
> Reviewed-and-tested-by: Venkatesh Srinivas <venkateshs@google.com>
> 
> Thanks,
> -- vs;

Thread overview: 10+ messages
2013-03-19  9:57 [PATCH V5 0/5] virtio-scsi multiqueue Wanlong Gao
2013-03-19  9:57 ` [PATCH V5 1/5] virtio-scsi: redo allocation of target data Wanlong Gao
     [not found]   ` <1363692727.2377.53.camel@dabdike.int.hansenpartnership.com>
2013-03-19 11:45     ` Paolo Bonzini
2013-03-19  9:57 ` [PATCH V5 2/5] virtio-scsi: pass struct virtio_scsi to virtqueue completion function Wanlong Gao
2013-03-19  9:57 ` [PATCH V5 3/5] virtio-scsi: push vq lock/unlock into virtscsi_vq_done Wanlong Gao
2013-03-19  9:57 ` [PATCH V5 4/5] virtio-scsi: introduce multiqueue support Wanlong Gao
2013-03-20  1:46   ` Venkatesh Srinivas
2013-03-20  7:24     ` Wanlong Gao
2013-03-20  9:53     ` Paolo Bonzini [this message]
2013-03-19  9:57 ` [PATCH V5 5/5] virtio-scsi: reset virtqueue affinity when doing cpu hotplug Wanlong Gao
