From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>,
fam@euphon.net, jasowang@redhat.com, mst@redhat.com,
pbonzini@redhat.com, qemu-devel@nongnu.org
Subject: Re: [PATCH 2/2] vhost-scsi: Add support for a worker thread per virtqueue
Date: Wed, 15 Nov 2023 07:57:18 -0500 [thread overview]
Message-ID: <20231115125718.GA301867@fedora> (raw)
In-Reply-To: <bucsvfqgs73w73tt4l5z35smccpebjq36hcozpgrkeydm3jumj@zisakm4noecq>
On Wed, Nov 15, 2023 at 12:43:02PM +0100, Stefano Garzarella wrote:
> On Mon, Nov 13, 2023 at 06:36:44PM -0600, Mike Christie wrote:
> > This adds support for vhost-scsi to be able to create a worker thread
> > per virtqueue. Right now for vhost-net we get a worker thread per
> > tx/rx virtqueue pair which scales nicely as we add more virtqueues and
> > CPUs, but for scsi we get the single worker thread that's shared by all
> > virtqueues. When trying to send IO to more than 2 virtqueues the single
> > thread becomes a bottleneck.
> >
> > This patch adds a new setting, virtqueue_workers, which can be set to:
> >
> > 1: Existing behavior where we get the single thread.
> > -1: Create a worker per IO virtqueue.
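
(Side note: I assume this knob would be driven from the command line
roughly like the line below; the wwpn and num_queues values are just
placeholders, only virtqueue_workers is the new bit from this patch.)

  -device vhost-scsi-pci,wwpn=naa.500140512345678a,num_queues=4,virtqueue_workers=-1
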
>
> I find this setting a bit odd. What about a boolean instead?
>
> `per_virtqueue_workers`:
> false: Existing behavior where we get the single thread.
> true: Create a worker per IO virtqueue.

Me too, I thought there would be round-robin assignment for 1 <
worker_cnt < (dev->nvqs - VHOST_SCSI_VQ_NUM_FIXED) but instead only 1
and -1 have any meaning.

Do you want to implement round-robin assignment?
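
To make that concrete, below is the kind of thing I was imagining for
1 < workers_cnt < number of IO virtqueues. It is an untested sketch that
reuses the names from this patch (vhost_new_worker(),
vhost_attach_vring_worker(), VHOST_SCSI_VQ_NUM_FIXED); the ENOTTY
fallback and error cleanup are omitted for brevity:

static int vhost_scsi_set_workers_rr(struct vhost_dev *dev, int workers_cnt)
{
    g_autofree struct vhost_worker_state *workers =
        g_new0(struct vhost_worker_state, workers_cnt);
    struct vhost_vring_worker vq_worker;
    int i, ret;

    /* Create the requested number of workers up front. */
    for (i = 0; i < workers_cnt; i++) {
        ret = dev->vhost_ops->vhost_new_worker(dev, &workers[i]);
        if (ret) {
            return ret;
        }
    }

    /* Cycle through the new workers while attaching the IO virtqueues. */
    for (i = VHOST_SCSI_VQ_NUM_FIXED + 1; i < dev->nvqs; i++) {
        memset(&vq_worker, 0, sizeof(vq_worker));
        vq_worker.worker_id =
            workers[(i - VHOST_SCSI_VQ_NUM_FIXED - 1) % workers_cnt].worker_id;
        vq_worker.index = i;

        ret = dev->vhost_ops->vhost_attach_vring_worker(dev, &vq_worker);
        if (ret) {
            return ret;
        }
    }

    return 0;
}
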
>
> >
> > Signed-off-by: Mike Christie <michael.christie@oracle.com>
> > ---
> > hw/scsi/vhost-scsi.c | 68 +++++++++++++++++++++++++++++++++
> > include/hw/virtio/virtio-scsi.h | 1 +
> > 2 files changed, 69 insertions(+)
> >
> > diff --git a/hw/scsi/vhost-scsi.c b/hw/scsi/vhost-scsi.c
> > index 3126df9e1d9d..5cf669b6563b 100644
> > --- a/hw/scsi/vhost-scsi.c
> > +++ b/hw/scsi/vhost-scsi.c
> > @@ -31,6 +31,9 @@
> > #include "qemu/cutils.h"
> > #include "sysemu/sysemu.h"
> >
> > +#define VHOST_SCSI_WORKER_PER_VQ -1
> > +#define VHOST_SCSI_WORKER_DEF 1
> > +
> > /* Features supported by host kernel. */
> > static const int kernel_feature_bits[] = {
> > VIRTIO_F_NOTIFY_ON_EMPTY,
> > @@ -165,6 +168,62 @@ static const VMStateDescription vmstate_virtio_vhost_scsi = {
> > .pre_save = vhost_scsi_pre_save,
> > };
> >
> > +static int vhost_scsi_set_workers(VHostSCSICommon *vsc, int workers_cnt)
> > +{
> > + struct vhost_dev *dev = &vsc->dev;
> > + struct vhost_vring_worker vq_worker;
> > + struct vhost_worker_state worker;
> > + int i, ret;
> > +
> > + /* Use default worker */
> > + if (workers_cnt == VHOST_SCSI_WORKER_DEF ||
> > + dev->nvqs == VHOST_SCSI_VQ_NUM_FIXED + 1) {
> > + return 0;
> > + }
> > +
> > + if (workers_cnt != VHOST_SCSI_WORKER_PER_VQ) {
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * ctl/evt share the first worker since it will be rare for them
> > + * to send cmds while IO is running.
> > + */
> > + for (i = VHOST_SCSI_VQ_NUM_FIXED + 1; i < dev->nvqs; i++) {
> > + memset(&worker, 0, sizeof(worker));
> > +
> > + ret = dev->vhost_ops->vhost_new_worker(dev, &worker);
>
> Should we call vhost_free_worker() in vhost_scsi_unrealize(), or are
> workers automatically freed when `vhostfd` is closed?
>
> The rest LGTM.
>
> Thanks,
> Stefano
>
> > + if (ret == -ENOTTY) {
> > + /*
> > + * worker ioctls are not implemented so just ignore and
> > + * continue device setup.
> > + */
> > + ret = 0;
> > + break;
> > + } else if (ret) {
> > + break;
> > + }
> > +
> > + memset(&vq_worker, 0, sizeof(vq_worker));
> > + vq_worker.worker_id = worker.worker_id;
> > + vq_worker.index = i;
> > +
> > + ret = dev->vhost_ops->vhost_attach_vring_worker(dev, &vq_worker);
> > + if (ret == -ENOTTY) {
> > + /*
> > + * It's a bug for the kernel to have supported the worker creation
> > + * ioctl but not attach.
> > + */
> > + dev->vhost_ops->vhost_free_worker(dev, &worker);
> > + break;
> > + } else if (ret) {
> > + break;
> > + }
> > + }
> > +
> > + return ret;
> > +}
> > +
> > static void vhost_scsi_realize(DeviceState *dev, Error **errp)
> > {
> > VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(dev);
> > @@ -232,6 +291,13 @@ static void vhost_scsi_realize(DeviceState *dev, Error **errp)
> > goto free_vqs;
> > }
> >
> > + ret = vhost_scsi_set_workers(vsc, vs->conf.virtqueue_workers);
> > + if (ret < 0) {
> > + error_setg(errp, "vhost-scsi: vhost worker setup failed: %s",
> > + strerror(-ret));
> > + goto free_vqs;
> > + }
> > +
> > /* At present, channel and lun both are 0 for bootable vhost-scsi disk */
> > vsc->channel = 0;
> > vsc->lun = 0;
> > @@ -297,6 +363,8 @@ static Property vhost_scsi_properties[] = {
> > VIRTIO_SCSI_F_T10_PI,
> > false),
> > DEFINE_PROP_BOOL("migratable", VHostSCSICommon, migratable, false),
> > + DEFINE_PROP_INT32("virtqueue_workers", VirtIOSCSICommon,
> > + conf.virtqueue_workers, VHOST_SCSI_WORKER_DEF),
> > DEFINE_PROP_END_OF_LIST(),
> > };
> >
> > diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
> > index 779568ab5d28..f70624ece564 100644
> > --- a/include/hw/virtio/virtio-scsi.h
> > +++ b/include/hw/virtio/virtio-scsi.h
> > @@ -51,6 +51,7 @@ typedef struct virtio_scsi_config VirtIOSCSIConfig;
> > struct VirtIOSCSIConf {
> > uint32_t num_queues;
> > uint32_t virtqueue_size;
> > + int virtqueue_workers;
> > bool seg_max_adjust;
> > uint32_t max_sectors;
> > uint32_t cmd_per_lun;
> > --
> > 2.34.1
> >
>