* Re: [PATCH 1/1] qemu vhost scsi: add VHOST_SET_VRING_ENABLE support
[not found] ` <1605223150-10888-2-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 11:53 ` Stefan Hajnoczi
2020-12-02 9:59 ` Michael S. Tsirkin
1 sibling, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 11:53 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:00PM -0600, Mike Christie wrote:
> +static int vhost_kernel_set_vring_enable(struct vhost_dev *dev, int enable)
> +{
> + struct vhost_vring_state s;
> + int i, ret;
> +
> + s.num = 1;
> + for (i = 0; i < dev->nvqs; ++i) {
> + s.index = i;
> +
> + ret = vhost_kernel_call(dev, VHOST_SET_VRING_ENABLE, &s);
> + /* Ignore kernels that do not support the cmd */
> + if (ret == -EPERM)
> + return 0;
> + if (ret)
> + goto disable_vrings;
> + }
The 'enable' argument is ignored and this function acts on all
virtqueues, while the ioctl acts on a single virtqueue only.
This function's behavior is actually "vhost_kernel_enable_vrings()"
(plural), not "vhost_kernel_set_vring_enable()" (singular).
Please rename this function and drop the enable argument.
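A minimal sketch of the suggested shape, based only on the hunk quoted above
(the rollback path behind the original goto is omitted here, and the naming is
of course up to the author):

    /* Enable every vring of the device; no 'enable' parameter needed. */
    static int vhost_kernel_enable_vrings(struct vhost_dev *dev)
    {
        struct vhost_vring_state s = { .num = 1 };
        int i, ret;

        for (i = 0; i < dev->nvqs; ++i) {
            s.index = i;

            ret = vhost_kernel_call(dev, VHOST_SET_VRING_ENABLE, &s);
            /* Ignore kernels that do not support the cmd */
            if (ret == -EPERM) {
                return 0;
            }
            if (ret) {
                return ret;
            }
        }
        return 0;
    }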
* Re: [PATCH 01/10] vhost: remove work arg from vhost_work_flush
[not found] ` <1605223150-10888-3-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 13:04 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 13:04 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:01PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index f22fce5..8795fd3 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1468,8 +1468,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> /* Flush both the vhost poll and vhost work */
> for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> vhost_scsi_flush_vq(vs, i);
> - vhost_work_flush(&vs->dev, &vs->vs_completion_work);
> - vhost_work_flush(&vs->dev, &vs->vs_event_work);
> + vhost_work_dev_flush(&vs->dev);
> + vhost_work_dev_flush(&vs->dev);
These two calls can be combined into a single call now.
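I.e. something like this sketch, where one device-wide flush covers both the
completion and the event work:

        /* Flush both the vhost poll and vhost work */
        for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
                vhost_scsi_flush_vq(vs, i);
        vhost_work_dev_flush(&vs->dev);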
* Re: [PATCH 02/10] vhost scsi: remove extra flushes
[not found] ` <1605223150-10888-4-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 13:07 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 13:07 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:02PM -0600, Mike Christie wrote:
> The vhost work flush function was flushing the entire work queue, so
> there is no need for the double vhost_work_dev_flush calls in
> vhost_scsi_flush.
>
> And we do not need to call vhost_poll_flush for each poller because
> that call also ends up flushing the same work queue thread the
> vhost_work_dev_flush call flushed.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 8 --------
> 1 file changed, 8 deletions(-)
Ah, this was done as a separate step:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH 03/10] vhost poll: fix coding style
[not found] ` <1605223150-10888-5-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 13:07 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 13:07 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:03PM -0600, Mike Christie wrote:
> We use like 3 coding styles in this struct. Switch to just tabs.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
> drivers/vhost/vhost.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH 05/10] vhost: poll support multiple workers
[not found] ` <1605223150-10888-7-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 15:32 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 15:32 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:05PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index d229515..9eeb8c7 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -187,13 +187,15 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
>
> /* Init poll structure */
> void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> - __poll_t mask, struct vhost_dev *dev)
> + __poll_t mask, struct vhost_dev *dev,
> + struct vhost_virtqueue *vq)
> {
> init_waitqueue_func_entry(&poll->wait, vhost_poll_wakeup);
> init_poll_funcptr(&poll->table, vhost_poll_func);
> poll->mask = mask;
> poll->dev = dev;
> poll->wqh = NULL;
> + poll->vq = vq;
>
> vhost_work_init(&poll->work, fn);
> }
Tying the poll mechanism to vqs rather than directly to vhost_worker
seems okay for now, but it might be necessary to change this later if
drivers want more flexibility to poll something that's not tied to a
vq or that uses worker 0.
Stefan
* Re: [PATCH 06/10] vhost scsi: make SCSI cmd completion per vq
[not found] ` <1605223150-10888-8-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 16:04 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 16:04 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:06PM -0600, Mike Christie wrote:
> In the last patches we are going to have a worker thread per IO vq.
> This patch separates the scsi cmd completion code paths so we can
> complete cmds based on their vq instead of having all cmds complete
> on the same worker thread.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 48 +++++++++++++++++++++++++-----------------------
> 1 file changed, 25 insertions(+), 23 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH 07/10] vhost, vhost-scsi: flush IO vqs then send TMF rsp
[not found] ` <1605223150-10888-9-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 16:05 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 16:05 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:07PM -0600, Mike Christie wrote:
> With one worker we will always send the scsi cmd responses then send the
> TMF rsp, because LIO will always complete the scsi cmds first which
> calls vhost_scsi_release_cmd to add them to the work queue.
>
> When the next patch adds multiple worker support, the worker threads
> could still be sending their responses when the tmf's work is run.
> So this patch has vhost-scsi flush the IO vqs on other worker threads
> before we send the tmf response.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 16 ++++++++++++++--
> drivers/vhost/vhost.c | 6 ++++++
> drivers/vhost/vhost.h | 1 +
> 3 files changed, 21 insertions(+), 2 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH 08/10] vhost: move msg_handler to new ops struct
[not found] ` <1605223150-10888-10-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 16:08 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 16:08 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:08PM -0600, Mike Christie wrote:
> The next patch adds a callout so drivers can perform some action when we
> get a VHOST_SET_VRING_ENABLE, so this patch moves the msg_handler callout
> to a new vhost_dev_ops struct just to keep all the callouts better
> organized.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/vdpa.c | 7 +++++--
> drivers/vhost/vhost.c | 10 ++++------
> drivers/vhost/vhost.h | 11 ++++++-----
> 3 files changed, 15 insertions(+), 13 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH 09/10] vhost: add VHOST_SET_VRING_ENABLE support
[not found] ` <1605223150-10888-11-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 16:14 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 16:14 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:19:09PM -0600, Mike Christie wrote:
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 2f98b81..e953031 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -1736,6 +1736,28 @@ static long vhost_vring_set_num_addr(struct vhost_dev *d,
>
> return r;
> }
> +
> +static long vhost_vring_set_enable(struct vhost_dev *d,
> + struct vhost_virtqueue *vq,
> + void __user *argp)
> +{
> + struct vhost_vring_state s;
> + int ret = 0;
> +
> + if (vq->private_data)
> + return -EBUSY;
> +
> + if (copy_from_user(&s, argp, sizeof s))
> + return -EFAULT;
> +
> + if (s.num != 1 && s.num != 0)
> + return -EINVAL;
> +
> + if (d->ops && d->ops->enable_vring)
> + ret = d->ops->enable_vring(vq, s.num);
> + return ret;
> +}
Silently ignoring this ioctl on drivers that don't implement
d->ops->enable_vring() could be a problem. Userspace expects to be able
to enable/disable vqs, we can't just return 0 because the vq won't be
enabled/disabled as requested.
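One way to make the missing support visible (a sketch only, not the posted
patch; -EOPNOTSUPP is just one possible errno choice):

static long vhost_vring_set_enable(struct vhost_dev *d,
                                   struct vhost_virtqueue *vq,
                                   void __user *argp)
{
        struct vhost_vring_state s;

        if (vq->private_data)
                return -EBUSY;

        if (copy_from_user(&s, argp, sizeof(s)))
                return -EFAULT;

        if (s.num != 1 && s.num != 0)
                return -EINVAL;

        /* Let userspace detect that this device cannot honor the request */
        if (!d->ops || !d->ops->enable_vring)
                return -EOPNOTSUPP;

        return d->ops->enable_vring(vq, s.num);
}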
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index a293f48..1279c09 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -158,6 +158,7 @@ struct vhost_msg_node {
>
> struct vhost_dev_ops {
> int (*msg_handler)(struct vhost_dev *dev, struct vhost_iotlb_msg *msg);
> + int (*enable_vring)(struct vhost_virtqueue *vq, bool enable);
Please add doc comments explaining what this callback needs to do and
the environment in which it is executed (locks that are held, etc).
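For example, something along these lines (wording illustrative; the exact
locking and lifetime rules are what the patch needs to spell out):

struct vhost_dev_ops {
        int (*msg_handler)(struct vhost_dev *dev, struct vhost_iotlb_msg *msg);
        /*
         * Called when userspace issues VHOST_SET_VRING_ENABLE on @vq,
         * before a backend has been set (vq->private_data == NULL) and
         * with the device mutex held by the ioctl caller. Return 0 on
         * success or a negative errno.
         */
        int (*enable_vring)(struct vhost_virtqueue *vq, bool enable);
};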
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
` (7 preceding siblings ...)
[not found] ` <1605223150-10888-11-git-send-email-michael.christie@oracle.com>
@ 2020-11-17 16:40 ` Stefan Hajnoczi
2020-11-18 5:17 ` Jason Wang
[not found] ` <b3343762-bb11-b750-46ec-43b5556f2b8e@oracle.com>
[not found] ` <1605223150-10888-2-git-send-email-michael.christie@oracle.com>
9 siblings, 2 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-17 16:40 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> The following kernel patches were made over Michael's vhost branch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>
> and the vhost-scsi bug fix patchset:
>
> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
>
> And the qemu patch was made over the qemu master branch.
>
> vhost-scsi currently supports multiple queues with the num_queues
> setting, but we end up with a setup where the guest's scsi/block
> layer can do a queue per vCPU and the layers below vhost can do
> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
> but all IO gets set on and completed on a single vhost-scsi thread.
> After 2 - 4 vqs this becomes a bottleneck.
>
> This patchset allows us to create a worker thread per IO vq, so we
> can better utilize multiple CPUs with the multiple queues. It
> implments Jason's suggestion to create the initial worker like
> normal, then create the extra workers for IO vqs with the
> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
How does userspace find out the tids and set their CPU affinity?
What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
really "enable" or "disable" the vq, requests are processed regardless.
The purpose of the ioctl isn't clear to me because the kernel could
automatically create 1 thread per vq without a new ioctl. On the other
hand, if userspace is supposed to control worker threads then a
different interface would be more powerful:
struct vhost_vq_worker_info {
        /*
         * The pid of an existing vhost worker that this vq will be
         * assigned to. When pid is 0 the virtqueue is assigned to the
         * default vhost worker. When pid is -1 a new worker thread is
         * created for this virtqueue. When pid is -2 the virtqueue's
         * worker thread is unchanged.
         *
         * If a vhost worker no longer has any virtqueues assigned to it
         * then it will terminate.
         *
         * The pid of the vhost worker is stored to this field when the
         * ioctl completes successfully. Use pid -2 to query the current
         * vhost worker pid.
         */
        __kernel_pid_t pid; /* in/out */

        /* The virtqueue index */
        unsigned int vq_idx; /* in */
};
ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info);
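For illustration, the intended calling pattern might look like this (purely
hypothetical; neither the ioctl nor the struct exists today):

        struct vhost_vq_worker_info info = {
                .pid    = -1,   /* create a new worker for this vq */
                .vq_idx = 1,
        };

        if (ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info) == 0) {
                /* info.pid now holds the new worker; share it with vq 2 */
                struct vhost_vq_worker_info share = {
                        .pid    = info.pid,
                        .vq_idx = 2,
                };

                ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &share);
        }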
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-17 16:40 ` [PATCH 00/10] vhost/qemu: thread per IO SCSI vq Stefan Hajnoczi
@ 2020-11-18 5:17 ` Jason Wang
[not found] ` <8318de9f-c585-39f8-d931-1ff5e0341d75@oracle.com>
[not found] ` <b3343762-bb11-b750-46ec-43b5556f2b8e@oracle.com>
1 sibling, 1 reply; 27+ messages in thread
From: Jason Wang @ 2020-11-18 5:17 UTC (permalink / raw)
To: Stefan Hajnoczi, Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>> The following kernel patches were made over Michael's vhost branch:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>>
>> and the vhost-scsi bug fix patchset:
>>
>> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
>>
>> And the qemu patch was made over the qemu master branch.
>>
>> vhost-scsi currently supports multiple queues with the num_queues
>> setting, but we end up with a setup where the guest's scsi/block
>> layer can do a queue per vCPU and the layers below vhost can do
>> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
>> but all IO gets set on and completed on a single vhost-scsi thread.
>> After 2 - 4 vqs this becomes a bottleneck.
>>
>> This patchset allows us to create a worker thread per IO vq, so we
>> can better utilize multiple CPUs with the multiple queues. It
>> implments Jason's suggestion to create the initial worker like
>> normal, then create the extra workers for IO vqs with the
>> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
> How does userspace find out the tids and set their CPU affinity?
>
> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
> really "enable" or "disable" the vq, requests are processed regardless.
Actually I think it should do a real "enable/disable" that tries to
follow the virtio spec.
(E.g. both PCI and MMIO have something similar.)
>
> The purpose of the ioctl isn't clear to me because the kernel could
> automatically create 1 thread per vq without a new ioctl.
It's not necessary to create or destroy a kthread according to
VRING_ENABLE, but it could be used as a hint.
> On the other
> hand, if userspace is supposed to control worker threads then a
> different interface would be more powerful:
>
> struct vhost_vq_worker_info {
> /*
> * The pid of an existing vhost worker that this vq will be
> * assigned to. When pid is 0 the virtqueue is assigned to the
> * default vhost worker. When pid is -1 a new worker thread is
> * created for this virtqueue. When pid is -2 the virtqueue's
> * worker thread is unchanged.
> *
> * If a vhost worker no longer has any virtqueues assigned to it
> * then it will terminate.
> *
> * The pid of the vhost worker is stored to this field when the
> * ioctl completes successfully. Use pid -2 to query the current
> * vhost worker pid.
> */
> __kernel_pid_t pid; /* in/out */
>
> /* The virtqueue index*/
> unsigned int vq_idx; /* in */
> };
>
> ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info);
This seems to leave the question to userspace, which I'm not sure is a
good idea since it introduces another scheduling layer.
A per-vq worker seems good enough to start with.
Thanks
>
> Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <8318de9f-c585-39f8-d931-1ff5e0341d75@oracle.com>
@ 2020-11-18 7:54 ` Jason Wang
[not found] ` <d6ffcf17-ab12-4830-cc3c-0f0402fb8a0f@oracle.com>
0 siblings, 1 reply; 27+ messages in thread
From: Jason Wang @ 2020-11-18 7:54 UTC (permalink / raw)
To: Mike Christie, Stefan Hajnoczi
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On 2020/11/18 2:57 PM, Mike Christie wrote:
> On 11/17/20 11:17 PM, Jason Wang wrote:
>> On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
>>> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>>>> The following kernel patches were made over Michael's vhost branch:
>>>>
>>>> https://urldefense.com/v3/__https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost__;!!GqivPVa7Brio!MzCv3wdRfz5dltunazRWGCeUkMg91pPEOLpIivsebLX9vhYDSi_E1V36e9H0NoRys_hU$
>>>> and the vhost-scsi bug fix patchset:
>>>>
>>>> https://urldefense.com/v3/__https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/*t__;Iw!!GqivPVa7Brio!MzCv3wdRfz5dltunazRWGCeUkMg91pPEOLpIivsebLX9vhYDSi_E1V36e9H0NmuPE_m8$
>>>> And the qemu patch was made over the qemu master branch.
>>>>
>>>> vhost-scsi currently supports multiple queues with the num_queues
>>>> setting, but we end up with a setup where the guest's scsi/block
>>>> layer can do a queue per vCPU and the layers below vhost can do
>>>> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
>>>> but all IO gets set on and completed on a single vhost-scsi thread.
>>>> After 2 - 4 vqs this becomes a bottleneck.
>>>>
>>>> This patchset allows us to create a worker thread per IO vq, so we
>>>> can better utilize multiple CPUs with the multiple queues. It
>>>> implments Jason's suggestion to create the initial worker like
>>>> normal, then create the extra workers for IO vqs with the
>>>> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
>>> How does userspace find out the tids and set their CPU affinity?
>>>
>>> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
>>> really "enable" or "disable" the vq, requests are processed regardless.
>>
>> Actually I think it should do the real "enable/disable" that tries to follow the virtio spec.
>>
> What does real mean here?
I think it means that when a vq is disabled, vhost won't process any
requests from that virtqueue.
> For the vdpa enable call for example, would it be like
> ifcvf_vdpa_set_vq_ready where it sets the ready bit or more like mlx5_vdpa_set_vq_ready
> where it can do some more work in the disable case?
For vDPA, it would be more complicated.
E.g. for IFCVF, it just delays the setting of queue_enable until it gets
DRIVER_OK. Technically it could pass queue_enable through to the
hardware, as mlx5e did.
>
> For net and something like ifcvf_vdpa_set_vq_ready's design would we have
> vhost_ring_ioctl() set some vhost_virtqueue enable bit. We then have some helper
> vhost_vq_is_enabled() and some code to detect if userspace supports the new ioctl.
Yes, vhost supports backend capabilities. When userspace negotiates the
new capability, we should depend on SET_VRING_ENABLE; if not, we can do
vhost_vq_is_enabled().
> And then in vhost_net_set_backend do we call vhost_vq_is_enabled()? What is done
> for disable then?
It needs more thought, but the question is not specific to
SET_VRING_ENABLE. Consider that the guest may zero the ring address as well.
For disabling, we can simply flush the work and disable all the polls.
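Roughly like the sketch below, reusing helpers from this series
(illustrative only, not a concrete patch):

static void vhost_dev_disable_vqs(struct vhost_dev *dev)
{
        int i;

        /* Stop the vq pollers from queueing new work... */
        for (i = 0; i < dev->nvqs; i++)
                vhost_poll_stop(&dev->vqs[i]->poll);

        /* ...then drain anything already queued. */
        vhost_work_dev_flush(dev);
}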
> It doesn't seem to buy a lot of new functionality. Is it just
> so we follow the spec?
My understanding is that, since the spec defines queue_enable, we should
support it in vhost. And we can piggyback the delayed vq creation on
this feature. Otherwise we will duplicate the functionality if we want to
support queue_enable.
>
> Or do you want it work more like mlx5_vdpa_set_vq_ready? For this in vhost_ring_ioctl
> when we get the new ioctl we would call into the drivers and have it start queues
> and stop queues? For enable, what we you do for net for this case?
Net is different: we can simply use SET_BACKEND to disable a
specific virtqueue without introducing new ioctls. Note that net mq
is a bit different from scsi in that it has a vhost device per queue pair,
and the API allows us to set the backend for a specific virtqueue.
> For disable,
> would you do something like vhost_net_stop_vq (we don't free up anything allocated
> in vhost_vring_ioctl calls, but we can stop what we setup in the net driver)?
It's up to you; if you think you should free the resources, you can do that.
> Is this useful for the current net mq design or is this for something like where
> you would do one vhost net device with multiple vqs?
I think SET_VRING_ENABLE is more useful for SCSI since it has a model
of multiple vqs per vhost device.
>
> My issue/convern is that in general these calls seems useful, but we don't really
> need them for scsi because vhost scsi is already stuck creating vqs like how it does
> due to existing users. If we do the ifcvf_vdpa_set_vq_ready type of design where
> we just set some bit, then the new ioctl does not give us a lot. It's just an extra
> check and extra code.
>
> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem like it's going
> to happen a lot where the admin is going to want to remove vqs from a running device.
In this case, qemu may just disable the queues of vhost-scsi via
SET_VRING_ENABLE and then we can free resources?
> And for both addition/removal for scsi we would need code in virtio scsi to handle
> hot plug removal/addition of a queue and then redoing the multiqueue mappings which
> would be difficult to add with no one requesting it.
Thanks
>
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <b3343762-bb11-b750-46ec-43b5556f2b8e@oracle.com>
@ 2020-11-18 9:54 ` Michael S. Tsirkin
2020-11-19 14:00 ` Stefan Hajnoczi
2020-11-18 11:31 ` Stefan Hajnoczi
1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2020-11-18 9:54 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, qemu-devel, virtualization, target-devel,
Stefan Hajnoczi, pbonzini
On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
> On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
> > On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> >> The following kernel patches were made over Michael's vhost branch:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
> >>
> >> and the vhost-scsi bug fix patchset:
> >>
> >> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
> >>
> >> And the qemu patch was made over the qemu master branch.
> >>
> >> vhost-scsi currently supports multiple queues with the num_queues
> >> setting, but we end up with a setup where the guest's scsi/block
> >> layer can do a queue per vCPU and the layers below vhost can do
> >> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
> >> but all IO gets set on and completed on a single vhost-scsi thread.
> >> After 2 - 4 vqs this becomes a bottleneck.
> >>
> >> This patchset allows us to create a worker thread per IO vq, so we
> >> can better utilize multiple CPUs with the multiple queues. It
> >> implments Jason's suggestion to create the initial worker like
> >> normal, then create the extra workers for IO vqs with the
> >> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
> >
> > How does userspace find out the tids and set their CPU affinity?
> >
>
> When we create the worker thread we add it to the device owner's cgroup,
> so we end up inheriting those settings like affinity.
>
> However, are you more asking about finer control like if the guest is
> doing mq, and the mq hw queue is bound to cpu0, it would perform
> better if we could bind vhost vq's worker thread to cpu0? I think the
> problem might is if you are in the cgroup then we can't set a specific
> threads CPU affinity to just one specific CPU. So you can either do
> cgroups or not.
Something we have wanted to try for a while is to allow userspace
to create threads for us, then specify which vqs each thread processes.
That would address this set of concerns ...
--
MST
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <b3343762-bb11-b750-46ec-43b5556f2b8e@oracle.com>
2020-11-18 9:54 ` Michael S. Tsirkin
@ 2020-11-18 11:31 ` Stefan Hajnoczi
2020-11-19 14:46 ` Michael S. Tsirkin
1 sibling, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-18 11:31 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
> On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
> > On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> >> The following kernel patches were made over Michael's vhost branch:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
> >>
> >> and the vhost-scsi bug fix patchset:
> >>
> >> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
> >>
> >> And the qemu patch was made over the qemu master branch.
> >>
> >> vhost-scsi currently supports multiple queues with the num_queues
> >> setting, but we end up with a setup where the guest's scsi/block
> >> layer can do a queue per vCPU and the layers below vhost can do
> >> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
> >> but all IO gets set on and completed on a single vhost-scsi thread.
> >> After 2 - 4 vqs this becomes a bottleneck.
> >>
> >> This patchset allows us to create a worker thread per IO vq, so we
> >> can better utilize multiple CPUs with the multiple queues. It
> >> implments Jason's suggestion to create the initial worker like
> >> normal, then create the extra workers for IO vqs with the
> >> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
> >
> > How does userspace find out the tids and set their CPU affinity?
> >
>
> When we create the worker thread we add it to the device owner's cgroup,
> so we end up inheriting those settings like affinity.
>
> However, are you more asking about finer control like if the guest is
> doing mq, and the mq hw queue is bound to cpu0, it would perform
> better if we could bind vhost vq's worker thread to cpu0? I think the
> problem might is if you are in the cgroup then we can't set a specific
> threads CPU affinity to just one specific CPU. So you can either do
> cgroups or not.
>
>
> > What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
> > really "enable" or "disable" the vq, requests are processed regardless.
> >
>
> Yeah, I agree. The problem I've mentioned before is:
>
> 1. For net and vsock, it's not useful because the vqs are hard coded in
> the kernel and userspace, so you can't disable a vq and you never need
> to enable one.
>
> 2. vdpa has it's own enable ioctl.
>
> 3. For scsi, because we already are doing multiple vqs based on the
> num_queues value, we have to have some sort of compat support and
> code to detect if userspace is even going to send the new ioctl.
> In this patchset, compat just meant enable/disable the extra functionality
> of extra worker threads for a vq. We will still use the vq if
> userspace set it up.
>
>
> > The purpose of the ioctl isn't clear to me because the kernel could
> > automatically create 1 thread per vq without a new ioctl. On the other
> > hand, if userspace is supposed to control worker threads then a
> > different interface would be more powerful:
> >
The main request I have is to clearly define the meaning of the
VHOST_SET_VRING_ENABLE ioctl. If you want to keep it as-is for now and
the vhost maintainers are happy with it, that's okay. It should just
be documented so that userspace and other vhost driver authors
understand what it's supposed to do.
> My preference has been:
>
> 1. If we were to ditch cgroups, then add a new interface that would allow
> us to bind threads to a specific CPU, so that it lines up with the guest's
> mq to CPU mapping.
A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases.
The CPU affinity is a userspace policy decision. The host kernel should
provide a mechanism but not the policy. That way userspace can decide
which workers are shared by multiple vqs and on which physical CPUs they
should run.
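As a concrete illustration of mechanism vs. policy: once userspace can
discover a worker's tid (which is exactly the open question in this thread),
it can already express any affinity policy with existing syscalls, e.g.:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin one vhost worker thread (tid) to one physical CPU. */
static int pin_worker(pid_t tid, int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(tid, sizeof(set), &set) < 0) {
                perror("sched_setaffinity");
                return -1;
        }
        return 0;
}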
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <d6ffcf17-ab12-4830-cc3c-0f0402fb8a0f@oracle.com>
@ 2020-11-19 4:35 ` Jason Wang
0 siblings, 0 replies; 27+ messages in thread
From: Jason Wang @ 2020-11-19 4:35 UTC (permalink / raw)
To: Mike Christie, Stefan Hajnoczi
Cc: fam, linux-scsi, mst, qemu-devel, virtualization, target-devel,
pbonzini
On 2020/11/19 4:06 AM, Mike Christie wrote:
> On 11/18/20 1:54 AM, Jason Wang wrote:
>>
>>> On 2020/11/18 2:57 PM, Mike Christie wrote:
>>>> On 11/17/20 11:17 PM, Jason Wang wrote:
>>>>> On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
>>>>> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>>>>>> The following kernel patches were made over Michael's vhost branch:
>>>>>>
>>>>>> https://urldefense.com/v3/__https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost__;!!GqivPVa7Brio!MzCv3wdRfz5dltunazRWGCeUkMg91pPEOLpIivsebLX9vhYDSi_E1V36e9H0NoRys_hU$
>>>>>>
>>>>>> and the vhost-scsi bug fix patchset:
>>>>>>
>>>>>> https://urldefense.com/v3/__https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/*t__;Iw!!GqivPVa7Brio!MzCv3wdRfz5dltunazRWGCeUkMg91pPEOLpIivsebLX9vhYDSi_E1V36e9H0NmuPE_m8$
>>>>>>
>>>>>> And the qemu patch was made over the qemu master branch.
>>>>>>
>>>>>> vhost-scsi currently supports multiple queues with the num_queues
>>>>>> setting, but we end up with a setup where the guest's scsi/block
>>>>>> layer can do a queue per vCPU and the layers below vhost can do
>>>>>> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
>>>>>> but all IO gets set on and completed on a single vhost-scsi thread.
>>>>>> After 2 - 4 vqs this becomes a bottleneck.
>>>>>>
>>>>>> This patchset allows us to create a worker thread per IO vq, so we
>>>>>> can better utilize multiple CPUs with the multiple queues. It
>>>>>> implments Jason's suggestion to create the initial worker like
>>>>>> normal, then create the extra workers for IO vqs with the
>>>>>> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
>>>>> How does userspace find out the tids and set their CPU affinity?
>>>>>
>>>>> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It
>>>>> doesn't
>>>>> really "enable" or "disable" the vq, requests are processed
>>>>> regardless.
>>>>
>>>> Actually I think it should do the real "enable/disable" that tries
>>>> to follow the virtio spec.
>>>>
>>> What does real mean here?
>>
>>
>> I think it means when a vq is disabled, vhost won't process any
>> request from that virtqueue.
>>
>>
>>> For the vdpa enable call for example, would it be like
>>> ifcvf_vdpa_set_vq_ready where it sets the ready bit or more like
>>> mlx5_vdpa_set_vq_ready
>>> where it can do some more work in the disable case?
>>
>>
>> For vDPA, it would be more complicated.
>>
>> E.g for IFCVF, it just delay the setting of queue_enable when it get
>> DRIVER_OK. Technically it can passthrough the queue_enable to the
>> hardware as what mlx5e did.
>>
>>
>>>
>>> For net and something like ifcvf_vdpa_set_vq_ready's design would we
>>> have
>>> vhost_ring_ioctl() set some vhost_virtqueue enable bit. We then have
>>> some helper
>>> vhost_vq_is_enabled() and some code to detect if userspace supports
>>> the new ioctl.
>>
>>
>> Yes, vhost support backend capability. When userspace negotiate the
>> new capability, we should depend on SET_VRING_ENABLE, if not we can
>> do vhost_vq_is_enable().
>>
>>
>>> And then in vhost_net_set_backend do we call vhost_vq_is_enabled()?
>>> What is done
>>> for disable then?
>>
>>
>> It needs more thought, but the question is not specific to
>> SET_VRING_ENABLE. Consider guest may zero ring address as well.
>>
>> For disabling, we can simply flush the work and disable all the polls.
>>
>>
>>> It doesn't seem to buy a lot of new functionality. Is it just
>>> so we follow the spec?
>>
>>
>> My understanding is that, since spec defines queue_enable, we should
>> support it in vhost. And we can piggyback the delayed vq creation
>> with this feature. Otherwise we will duplicate the function if we
>> want to support queue_enable.
>
>
> I had actually given up on the delayed vq creation goal. I'm still not
> sure how it's related to ENABLE and I think it gets pretty gross.
>
> 1. If we started from a semi-clean slate, and used the ENABLE ioctl
> more like a CREATE ioctl, and did the ENABLE after vhost dev open()
> but before any other ioctls, we can allocate the vq when we get the
> ENABLE ioctl. This fixes the issue where vhost scsi is allocating 128
> vqs at open() time. We can then allocate metadata like the iovecs at
> ENABLE time or when we get a setup ioctl that is related to the
> metadata, so it fixes that too.
>
> That makes sense how ENABLE is related to delayed vq allocation and
> why we would want it.
>
> If we now need to support old tools though, then you lose me. To try
> and keep the code paths using the same code, then at vhost dev open()
> time do we start vhost_dev_init with zero vqs like with the allocate
> at ENABLE time case? Then when we get the first vring or dev ioctl, do
> we allocate the vq and related metadata? If so, the ENABLE does not
> buy us a lot since we get the delayed allocation from the compat code.
> Also this compat case gets really messy when we are delaying the
> actual vq and not just the metadata.
>
> If for the compat case, we keep the code that before/during
> vhost_dev_init allocates all the vqs and does the initialization, then
> we end up with 2 very very different code paths. And we also need a
> new modparam or something to tell the drivers to do the old or new
> open() behavior.
Right, so I think maybe we can take a step back. Instead of depending on
an explicit new ioctl, which may cause a lot of issues, can we do something
similar to vhost_vq_is_setup()?
That is, create/destroy new workers on SET_VRING_ADDR?
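Hypothetically, in vhost_vring_ioctl() it could look like the sketch below
(vq->worker and vhost_vq_worker_create() do not exist; they stand in for
whatever the series ends up using):

        case VHOST_SET_VRING_NUM:
        case VHOST_SET_VRING_ADDR:
                r = vhost_vring_set_num_addr(d, vq, ioctl, argp);
                /* First time the ring is fully set up: spawn its worker. */
                if (!r && ioctl == VHOST_SET_VRING_ADDR &&
                    vhost_vq_is_setup(vq) && !vq->worker)
                        r = vhost_vq_worker_create(d, vq);
                break;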
>
> 2. If we do an approach that is less invasive to the kernel for the
> compat case, and do the ENABLE ioctl after other vring ioctl calls
> then that would not work for the delayed vq allocation goal since the
> ENABLE call is too late.
>
>
>>
>>
>>>
>>> Or do you want it work more like mlx5_vdpa_set_vq_ready? For this in
>>> vhost_ring_ioctl
>>> when we get the new ioctl we would call into the drivers and have it
>>> start queues
>>> and stop queues? For enable, what we you do for net for this case?
>>
>>
>> Net is something different, we can simply use SET_BACKEND to disable
>> a specific virtqueue without introducing new ioctls. Notice that, net
>> mq is kind of different with scsi which have a per queue pair vhost
>> device, and the API allows us to set backend for a specific virtqueue.
>
>
> That's one of the things I am trying to understand. It sounds like
> ENABLE is not useful to net. Will net even use/implement the ENABLE
> ioctl or just use the SET_BACKEND?
I think SET_BACKEND is sufficient for net.
> What about vsock?
For vsock (and scsi as well), the backend is per virtqueue, but the
actual issue is that there's no uAPI to configure it per vq. The current
uAPI is per device.
>
> For net it sounds like it's just going to add an extra code path if
> you support it.
Yes, so if we really want one (which is still questionable given our
discussion), we can start from a SCSI-specific one (or an alias of the
vDPA one).
>
>
>>
>>
>>> For disable,
>>> would you do something like vhost_net_stop_vq (we don't free up
>>> anything allocated
>>> in vhost_vring_ioctl calls, but we can stop what we setup in the net
>>> driver)?
>>
>>
>> It's up to you, if you think you should free the resources you can do
>> that.
>>
>>
>>> Is this useful for the current net mq design or is this for
>>> something like where
>>> you would do one vhost net device with multiple vqs?
>>
>>
>> I think SET_VRING_ENABLE is more useful for SCSI since it have a
>> model of multiple vqs per vhost device.
>
> That is why I was asking about if you were going to change net.
>
> It would have been useful for scsi if we had it when mq support was
> added and we don't have to support old tools. But now, if enable=true,
> is only going to be something where we set some bit so later when
> VHOST_SCSI_SET_ENDPOINT is run it we can do what we are already doing
> its just extra code. This patch:
> https://www.spinics.net/lists/linux-scsi/msg150151.html
> would work without the ENABLE ioctl I mean.
That seems to pre-allocate all workers. If we don't care about the
resource consumption (127 workers), it could be fine.
>
>
> And if you guys want to do the completely new interface, then none of
> this matters I guess :)
>
> For disable see below.
>
>>
>>
>>>
>>> My issue/convern is that in general these calls seems useful, but we
>>> don't really
>>> need them for scsi because vhost scsi is already stuck creating vqs
>>> like how it does
>>> due to existing users. If we do the ifcvf_vdpa_set_vq_ready type of
>>> design where
>>> we just set some bit, then the new ioctl does not give us a lot.
>>> It's just an extra
>>> check and extra code.
>>>
>>> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem
>>> like it's going
>>> to happen a lot where the admin is going to want to remove vqs from
>>> a running device.
>>
>>
>> In this case, qemu may just disable the queues of vhost-scsi via
>> SET_VRING_ENABLE and then we can free resources?
>
>
> Some SCSI background in case it doesn't work like net:
> -------
> When the user sets up mq for vhost-scsi/virtio-scsi, for max perf and
> no cares about mem use they would normally set num_queues based on the
> number of vCPUs and MSI-x vectors. I think the default in qemu now is
> to try and detect that value.
>
> When the virtio_scsi driver is loaded into the guest kernel, it takes
> the num_queues value and tells the scsi/block mq layer to create
> num_queues multiqueue hw queues.
If I read the code correctly, for a modern device the guest will set
queue_enable for the queues that it wants to use. So in this ideal case,
qemu can forward that to VRING_ENABLE and reset VRING_ENABLE during
device reset.
But it would be complicated to support legacy devices and legacy qemu.
>
> ------
>
> I was trying to say in the previous email that is if all we do is set
> some bits to indicate the queue is disabled, free its resources, stop
> polling/queueing in the scsi/target layer, flush etc, it does not seem
> useful. I was trying to ask when would a user only want this behavior?
I think it's device reset; the semantics are that unless the queue is
enabled, we should treat it as disabled.
>
> I think we need an extra piece where the guests needs to be modified
> to handle the queue removal or the block/scsi layers would still send
> IO and we would get IO errors. Without this it seems like some extra
> code that we will not use.
>
> And then if we are going to make disable useful like this, what about
> enable? We would want to the reverse where we add the queue and the
> guest remaps the mq to hw queue layout. To do this, enable has to do
> more than just set some bits. There is also an issue with how it would
> need to interact with the SET_BACKEND
> (VHOST_SCSI_SET_ENDPOINT/VHOST_SCSI_CLEAR_ENDPOINT for scsi) calls.
>
> I think if we wanted the ENABLE ioctl to work like this then that is
> not related to my patches and I like I've written before I think my
> patches do not need the ENABLE ioctl in general. We could add the
> patch where we create the workers threads from
> VHOST_SCSI_SET_ENDPOINT. And if we ever add this queue hotplug type of
> code, then the worker thread would just get moved/rearranged with the
> other vq modification code in
> vhost_scsi_set_endpoint/vhost_scsi_clear_endpoint.
>
> We could also go the new threading interface route, and also do the
> ENABLE ioctl separately.
Right, my original idea was to try to make queue_enable (in the spec)
work for SCSI so we could use it for any delayed setup (vqs or workers).
But it looks like it's not as easy as I imagined.
Thanks
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-18 9:54 ` Michael S. Tsirkin
@ 2020-11-19 14:00 ` Stefan Hajnoczi
0 siblings, 0 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-19 14:00 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: fam, linux-scsi, qemu-devel, virtualization, target-devel,
pbonzini, Mike Christie
On Wed, Nov 18, 2020 at 04:54:07AM -0500, Michael S. Tsirkin wrote:
> On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
> > On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
> > > On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> > >> The following kernel patches were made over Michael's vhost branch:
> > >>
> > >> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
> > >>
> > >> and the vhost-scsi bug fix patchset:
> > >>
> > >> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
> > >>
> > >> And the qemu patch was made over the qemu master branch.
> > >>
> > >> vhost-scsi currently supports multiple queues with the num_queues
> > >> setting, but we end up with a setup where the guest's scsi/block
> > >> layer can do a queue per vCPU and the layers below vhost can do
> > >> a queue per CPU. vhost-scsi will then do a num_queue virtqueues,
> > >> but all IO gets set on and completed on a single vhost-scsi thread.
> > >> After 2 - 4 vqs this becomes a bottleneck.
> > >>
> > >> This patchset allows us to create a worker thread per IO vq, so we
> > >> can better utilize multiple CPUs with the multiple queues. It
> > >> implments Jason's suggestion to create the initial worker like
> > >> normal, then create the extra workers for IO vqs with the
> > >> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.
> > >
> > > How does userspace find out the tids and set their CPU affinity?
> > >
> >
> > When we create the worker thread we add it to the device owner's cgroup,
> > so we end up inheriting those settings like affinity.
> >
> > However, are you more asking about finer control like if the guest is
> > doing mq, and the mq hw queue is bound to cpu0, it would perform
> > better if we could bind vhost vq's worker thread to cpu0? I think the
> > problem might is if you are in the cgroup then we can't set a specific
> > threads CPU affinity to just one specific CPU. So you can either do
> > cgroups or not.
>
> Something we wanted to try for a while is to allow userspace
> to create threads for us, then specify which vqs it processes.
Do you mean an interface like a blocking ioctl(vhost_fd,
VHOST_WORKER_RUN) where the vhost processing is done in the context of
the caller's userspace thread?
What is neat about this is that it removes thread configuration from the
kernel vhost code. On the other hand, userspace still needs an interface
indicating which vqs should be processed. Maybe it would even require an
int worker_fd = ioctl(vhost_fd, VHOST_WORKER_CREATE) and then
ioctl(worker_fd, VHOST_WORKER_BIND_VQ, vq_idx)? So then it becomes
complex again...
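Spelled out, that more complex variant would be something like (every name
here is hypothetical):

        int worker_fd = ioctl(vhost_fd, VHOST_WORKER_CREATE);

        if (worker_fd >= 0) {
                unsigned vq_idx;

                for (vq_idx = 0; vq_idx < nr_io_vqs; vq_idx++)
                        ioctl(worker_fd, VHOST_WORKER_BIND_VQ, vq_idx);
                /* ...then donate a thread to run the worker, as above. */
        }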
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-18 11:31 ` Stefan Hajnoczi
@ 2020-11-19 14:46 ` Michael S. Tsirkin
[not found] ` <ceebdc90-3ffc-1563-ff85-12a848bcba18@oracle.com>
0 siblings, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2020-11-19 14:46 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: fam, linux-scsi, qemu-devel, virtualization, target-devel,
pbonzini, Mike Christie
On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > My preference has been:
> >
> > 1. If we were to ditch cgroups, then add a new interface that would allow
> > us to bind threads to a specific CPU, so that it lines up with the guest's
> > mq to CPU mapping.
>
> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases.
>
> The CPU affinity is a userspace policy decision. The host kernel should
> provide a mechanism but not the policy. That way userspace can decide
> which workers are shared by multiple vqs and on which physical CPUs they
> should run.
So if we let userspace dictate the threading policy, then I think binding
vqs to userspace threads and running there makes the most sense;
no need for the kernel to create the threads.
--
MST
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <ceebdc90-3ffc-1563-ff85-12a848bcba18@oracle.com>
@ 2020-11-19 16:24 ` Stefan Hajnoczi
[not found] ` <ffd88f0c-981e-a102-4b08-f29d6b9a0f71@oracle.com>
0 siblings, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-19 16:24 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Stefan Hajnoczi,
Paolo Bonzini
On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> >>> My preference has been:
> >>>
> >>> 1. If we were to ditch cgroups, then add a new interface that would allow
> >>> us to bind threads to a specific CPU, so that it lines up with the guest's
> >>> mq to CPU mapping.
> >>
> >> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases.
> >>
> >> The CPU affinity is a userspace policy decision. The host kernel should
> >> provide a mechanism but not the policy. That way userspace can decide
> >> which workers are shared by multiple vqs and on which physical CPUs they
> >> should run.
> >
> > So if we let userspace dictate the threading policy then I think binding
> > vqs to userspace threads and running there makes the most sense,
> > no need to create the threads.
> >
>
> Just to make sure I am on the same page, in one of the first postings of
> this set at the bottom of the mail:
>
> https://www.spinics.net/lists/linux-scsi/msg148322.html
>
> I asked about a new interface and had done something more like what
> Stefan posted:
>
> struct vhost_vq_worker_info {
> /*
> * The pid of an existing vhost worker that this vq will be
> * assigned to. When pid is 0 the virtqueue is assigned to the
> * default vhost worker. When pid is -1 a new worker thread is
> * created for this virtqueue. When pid is -2 the virtqueue's
> * worker thread is unchanged.
> *
> * If a vhost worker no longer has any virtqueues assigned to it
> * then it will terminate.
> *
> * The pid of the vhost worker is stored to this field when the
> * ioctl completes successfully. Use pid -2 to query the current
> * vhost worker pid.
> */
> __kernel_pid_t pid; /* in/out */
>
> /* The virtqueue index*/
> unsigned int vq_idx; /* in */
> };
>
> This approach is simple and it allowed me to have userspace map queues
> and threads optimally for our setups.
>
> Note: Stefan, in response to your previous comment, I am just using my
> 1:1 mapping as an example and would make it configurable from userspace.
>
> In the email above are you guys suggesting to execute the SCSI/vhost
> requests in userspace? We should not do that because:
>
> 1. It negates part of what makes vhost fast where we do not have to kick
> out to userspace then back to the kernel.
>
> 2. It's not doable or becomes a crazy mess because vhost-scsi is tied to
> the scsi/target layer in the kernel. You can't process the scsi command
> in userspace since the scsi state machine and all its configuration info
> is in the kernel's scsi/target layer.
>
> For example, I was just the maintainer of the target_core_user module
> that hooks into LIO/target on the backend (vhost-scsi hooks in on the
> front end) and passes commands to userspace and there we have a
> semi-shadow state machine. It gets nasty to try and maintain/sync state
> between lio/target core in the kernel and in userspace. We also see the
> perf loss I mentioned in #1.
No, if I understand Michael correctly he has suggested a different approach.
My suggestion was that the kernel continues to manage the worker
threads but an ioctl allows userspace to control the policy.
I think Michael is saying that the kernel shouldn't manage/create
threads. Userspace should create threads and then invoke an ioctl from
those threads.
The ioctl will call into the vhost driver where it will execute
something similar to vhost_worker(). So this ioctl will block while
the kernel is using the thread to process vqs.
What isn't clear to me is how to tell the kernel which vqs are
processed by a thread. We could try to pass that information into the
ioctl. I'm not sure what the cleanest solution is here.
Maybe something like:
struct vhost_run_worker_info {
        struct timespec *timeout;
        sigset_t *sigmask;

        /* List of virtqueues to process */
        unsigned nvqs;
        unsigned vqs[];
};
/* This blocks until the timeout is reached, a signal is received, or
the vhost device is destroyed */
int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
As you can see, userspace isn't involved with dealing with the
requests. It just acts as a thread donor to the vhost driver.
We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
penalty of switching into the kernel, copying in the arguments, etc.
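A hypothetical thread-donor function built on the proposal above (none of
these names exist in the UAPI; this only shows the intended calling pattern):

#include <signal.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <time.h>

struct vhost_run_worker_info {          /* as proposed above */
        struct timespec *timeout;
        sigset_t *sigmask;
        unsigned nvqs;
        unsigned vqs[];
};

static void *worker_thread(void *arg)
{
        int vhost_fd = *(int *)arg;
        struct vhost_run_worker_info *info;

        info = calloc(1, sizeof(*info) + 2 * sizeof(unsigned));
        info->timeout = NULL;           /* block until the device goes away */
        info->sigmask = NULL;
        info->nvqs = 2;
        info->vqs[0] = 1;               /* the IO vqs donated to this thread */
        info->vqs[1] = 2;

        ioctl(vhost_fd, VHOST_RUN_WORKER, info);        /* hypothetical */

        free(info);
        return NULL;
}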
Michael: is this the kind of thing you were thinking of?
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
[not found] ` <ffd88f0c-981e-a102-4b08-f29d6b9a0f71@oracle.com>
@ 2020-11-19 17:08 ` Stefan Hajnoczi
2020-11-20 8:45 ` Stefan Hajnoczi
0 siblings, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-19 17:08 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Stefan Hajnoczi,
Paolo Bonzini
On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > <michael.christie@oracle.com> wrote:
> >>
> >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > struct vhost_run_worker_info {
> > struct timespec *timeout;
> > sigset_t *sigmask;
> >
> > /* List of virtqueues to process */
> > unsigned nvqs;
> > unsigned vqs[];
> > };
> >
> > /* This blocks until the timeout is reached, a signal is received, or
> > the vhost device is destroyed */
> > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> >
> > As you can see, userspace isn't involved with dealing with the
> > requests. It just acts as a thread donor to the vhost driver.
> >
> > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
> > penalty of switching into the kernel, copying in the arguments, etc.
>
> I didn't get this part. Why have the timeout? When the timeout expires,
> does userspace just call right back down to the kernel or does it do
> some sort of processing/operation?
>
> You could have your worker function run from that ioctl wait for a
> signal or a wake up call from the vhost_work/poll functions.
An optional timeout argument is common in blocking interfaces like
poll(2), recvmmsg(2), etc.
Although something can send a signal to the thread instead,
implementing that in an application is more awkward than passing a
struct timespec.
Compared to other blocking calls we don't expect
ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
rarely be used and can be dropped from the interface.
BTW the code I posted wasn't a carefully thought out proposal :). The
details still need to be considered and I'm going to be offline for
the next week so maybe someone else can think it through in the
meantime.
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-19 17:08 ` Stefan Hajnoczi
@ 2020-11-20 8:45 ` Stefan Hajnoczi
2020-11-20 12:31 ` Michael S. Tsirkin
2020-11-23 15:17 ` Stefano Garzarella
0 siblings, 2 replies; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-11-20 8:45 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Stefan Hajnoczi,
Paolo Bonzini
On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
> <michael.christie@oracle.com> wrote:
> >
> > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > > <michael.christie@oracle.com> wrote:
> > >>
> > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > > struct vhost_run_worker_info {
> > > struct timespec *timeout;
> > > sigset_t *sigmask;
> > >
> > > /* List of virtqueues to process */
> > > unsigned nvqs;
> > > unsigned vqs[];
> > > };
> > >
> > > /* This blocks until the timeout is reached, a signal is received, or
> > > the vhost device is destroyed */
> > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> > >
> > > As you can see, userspace isn't involved with dealing with the
> > > requests. It just acts as a thread donor to the vhost driver.
> > >
> > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
> > > penalty of switching into the kernel, copying in the arguments, etc.
> >
> > I didn't get this part. Why have the timeout? When the timeout expires,
> > does userspace just call right back down to the kernel or does it do
> > some sort of processing/operation?
> >
> > You could have your worker function run from that ioctl wait for a
> > signal or a wake up call from the vhost_work/poll functions.
>
> An optional timeout argument is common in blocking interfaces like
> poll(2), recvmmsg(2), etc.
>
> Although something can send a signal to the thread instead,
> implementing that in an application is more awkward than passing a
> struct timespec.
>
> Compared to other blocking calls we don't expect
> ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
> rarely be used and can be dropped from the interface.
>
> BTW the code I posted wasn't a carefully thought out proposal :). The
> details still need to be considered and I'm going to be offline for
> the next week so maybe someone else can think it through in the
> meantime.
One final thought before I'm offline for a week. If
ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
then it's hard to support poll-mode (busy waiting) workers because
each device instance consumes a whole CPU. If we stick to an interface
where the kernel manages the worker threads then it's easier to share
workers between devices for polling.
I have CCed Stefano Garzarella, who is looking at similar designs for
vDPA software device implementations.
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-20 8:45 ` Stefan Hajnoczi
@ 2020-11-20 12:31 ` Michael S. Tsirkin
2020-12-01 12:59 ` Stefan Hajnoczi
2020-11-23 15:17 ` Stefano Garzarella
1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2020-11-20 12:31 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: fam, linux-scsi, qemu-devel, Linux Virtualization, target-devel,
Stefan Hajnoczi, Paolo Bonzini, Mike Christie
On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
> On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
> > <michael.christie@oracle.com> wrote:
> > >
> > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > > > <michael.christie@oracle.com> wrote:
> > > >>
> > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > > > struct vhost_run_worker_info {
> > > > struct timespec *timeout;
> > > > sigset_t *sigmask;
> > > >
> > > > /* List of virtqueues to process */
> > > > unsigned nvqs;
> > > > unsigned vqs[];
> > > > };
> > > >
> > > > /* This blocks until the timeout is reached, a signal is received, or
> > > > the vhost device is destroyed */
> > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> > > >
> > > > As you can see, userspace isn't involved with dealing with the
> > > > requests. It just acts as a thread donor to the vhost driver.
> > > >
> > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
> > > > penalty of switching into the kernel, copying in the arguments, etc.
> > >
> > > I didn't get this part. Why have the timeout? When the timeout expires,
> > > does userspace just call right back down to the kernel or does it do
> > > some sort of processing/operation?
> > >
> > > You could have your worker function run from that ioctl wait for a
> > > signal or a wake up call from the vhost_work/poll functions.
> >
> > An optional timeout argument is common in blocking interfaces like
> > poll(2), recvmmsg(2), etc.
> >
> > Although something can send a signal to the thread instead,
> > implementing that in an application is more awkward than passing a
> > struct timespec.
> >
> > Compared to other blocking calls we don't expect
> > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
> > rarely be used and can be dropped from the interface.
> >
> > BTW the code I posted wasn't a carefully thought out proposal :). The
> > details still need to be considered and I'm going to be offline for
> > the next week so maybe someone else can think it through in the
> > meantime.
>
> One final thought before I'm offline for a week. If
> ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
> then it's hard to support poll-mode (busy waiting) workers because
> each device instance consumes a whole CPU. If we stick to an interface
> where the kernel manages the worker threads then it's easier to share
> workers between devices for polling.
Yes, that is the reason vhost did its own thing in the first place.
I am vaguely thinking about poll(2) or a similar interface,
which can wait for an event on multiple FDs.
> I have CCed Stefano Garzarella, who is looking at similar designs for
> vDPA software device implementations.
>
> Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-20 8:45 ` Stefan Hajnoczi
2020-11-20 12:31 ` Michael S. Tsirkin
@ 2020-11-23 15:17 ` Stefano Garzarella
1 sibling, 0 replies; 27+ messages in thread
From: Stefano Garzarella @ 2020-11-23 15:17 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Stefan Hajnoczi,
Paolo Bonzini, Mike Christie
On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
>On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
>> <michael.christie@oracle.com> wrote:
>> >
>> > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
>> > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
>> > > <michael.christie@oracle.com> wrote:
>> > >>
>> > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
>> > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
>> > > struct vhost_run_worker_info {
>> > > struct timespec *timeout;
>> > > sigset_t *sigmask;
>> > >
>> > > /* List of virtqueues to process */
>> > > unsigned nvqs;
>> > > unsigned vqs[];
>> > > };
>> > >
>> > > /* This blocks until the timeout is reached, a signal is received, or
>> > > the vhost device is destroyed */
>> > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
>> > >
>> > > As you can see, userspace isn't involved with dealing with the
>> > > requests. It just acts as a thread donor to the vhost driver.
>> > >
>> > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
>> > > penalty of switching into the kernel, copying in the arguments, etc.
>> >
>> > I didn't get this part. Why have the timeout? When the timeout expires,
>> > does userspace just call right back down to the kernel or does it do
>> > some sort of processing/operation?
>> >
>> > You could have your worker function run from that ioctl wait for a
>> > signal or a wake up call from the vhost_work/poll functions.
>>
>> An optional timeout argument is common in blocking interfaces like
>> poll(2), recvmmsg(2), etc.
>>
>> Although something can send a signal to the thread instead,
>> implementing that in an application is more awkward than passing a
>> struct timespec.
>>
>> Compared to other blocking calls we don't expect
>> ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
>> rarely be used and can be dropped from the interface.
>>
>> BTW the code I posted wasn't a carefully thought out proposal :). The
>> details still need to be considered and I'm going to be offline for
>> the next week so maybe someone else can think it through in the
>> meantime.
>
>One final thought before I'm offline for a week. If
>ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
>then it's hard to support poll-mode (busy waiting) workers because
>each device instance consumes a whole CPU. If we stick to an interface
>where the kernel manages the worker threads then it's easier to share
>workers between devices for polling.
I agree, ioctl(VHOST_RUN_WORKER) is interesting and perhaps simplifies
thread management (pinning, etc.), but with kernel-managed kthreads it
would be easier to implement polling that shares a worker across
multiple devices.
>
>I have CCed Stefano Garzarella, who is looking at similar designs for
>vDPA software device implementations.
Thanks! Mike, can you please keep me in CC for this work?
It's really interesting since I'll have similar issues to solve with
vDPA software devices.
Thanks,
Stefano
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-11-20 12:31 ` Michael S. Tsirkin
@ 2020-12-01 12:59 ` Stefan Hajnoczi
2020-12-01 13:45 ` Stefano Garzarella
0 siblings, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-12-01 12:59 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: fam, linux-scsi, qemu-devel, Linux Virtualization, target-devel,
Paolo Bonzini, Mike Christie
On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote:
> On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
> > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
> > > <michael.christie@oracle.com> wrote:
> > > >
> > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > > > > <michael.christie@oracle.com> wrote:
> > > > >>
> > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > > > > struct vhost_run_worker_info {
> > > > > struct timespec *timeout;
> > > > > sigset_t *sigmask;
> > > > >
> > > > > /* List of virtqueues to process */
> > > > > unsigned nvqs;
> > > > > unsigned vqs[];
> > > > > };
> > > > >
> > > > > /* This blocks until the timeout is reached, a signal is received, or
> > > > > the vhost device is destroyed */
> > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> > > > >
> > > > > As you can see, userspace isn't involved with dealing with the
> > > > > requests. It just acts as a thread donor to the vhost driver.
> > > > >
> > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
> > > > > penalty of switching into the kernel, copying in the arguments, etc.
> > > >
> > > > I didn't get this part. Why have the timeout? When the timeout expires,
> > > > does userspace just call right back down to the kernel or does it do
> > > > some sort of processing/operation?
> > > >
> > > > You could have your worker function run from that ioctl wait for a
> > > > signal or a wake up call from the vhost_work/poll functions.
> > >
> > > An optional timeout argument is common in blocking interfaces like
> > > poll(2), recvmmsg(2), etc.
> > >
> > > Although something can send a signal to the thread instead,
> > > implementing that in an application is more awkward than passing a
> > > struct timespec.
> > >
> > > Compared to other blocking calls we don't expect
> > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
> > > rarely be used and can be dropped from the interface.
> > >
> > > BTW the code I posted wasn't a carefully thought out proposal :). The
> > > details still need to be considered and I'm going to be offline for
> > > the next week so maybe someone else can think it through in the
> > > meantime.
> >
> > One final thought before I'm offline for a week. If
> > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
> > then it's hard to support poll-mode (busy waiting) workers because
> > each device instance consumes a whole CPU. If we stick to an interface
> > where the kernel manages the worker threads then it's easier to share
> > workers between devices for polling.
>
>
> Yes, that is the reason vhost did its own thing in the first place.
>
>
> I am vaguely thinking about poll(2) or a similar interface,
> which can wait for an event on multiple FDs.
I can imagine how using poll(2) would work from a userspace perspective,
but on the kernel side I don't think it can be implemented cleanly.
poll(2) is tied to the file_operations->poll() callback and
read/write/error events. Not to mention there isn't a way to substitute
the vhost worker thread function instead of scheduling out the current
thread while waiting for poll fd events.
But maybe ioctl(VHOST_WORKER_RUN) can do it:
struct vhost_run_worker_dev {
int vhostfd; /* /dev/vhost-TYPE fd */
unsigned nvqs; /* number of virtqueues in vqs[] */
unsigned vqs[]; /* virtqueues to process */
};
struct vhost_run_worker_info {
struct timespec *timeout;
sigset_t *sigmask;
unsigned ndevices;
struct vhost_run_worker_dev *devices[];
};
In the simple case userspace sets ndevices to 1 and we just handle
virtqueues for the current device.
In the fancier shared worker thread case the userspace process has the
vhost fds of all the devices it is processing and passes them to
ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements.
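As a sketch of that shared-worker case (the structures are restated so
the example is self-contained; the ioctl number, and which fd it would
be issued on, are placeholders):

#include <signal.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>        /* VHOST_VIRTIO */

struct vhost_run_worker_dev {
        int vhostfd;            /* /dev/vhost-TYPE fd */
        unsigned nvqs;          /* number of virtqueues in vqs[] */
        unsigned vqs[2];        /* fixed size for brevity */
};

struct vhost_run_worker_info {
        struct timespec *timeout;
        sigset_t *sigmask;
        unsigned ndevices;
        struct vhost_run_worker_dev *devices[2];
};

#define VHOST_WORKER_RUN \
        _IOW(VHOST_VIRTIO, 0x7d, struct vhost_run_worker_info)

/* One donated thread polling the IO vqs of two vhost-scsi devices
 * (in vhost-scsi, vqs 0 and 1 are control/event, 2 and up are IO). */
static int run_shared_worker(int fd, int scsi_fd_a, int scsi_fd_b)
{
        struct vhost_run_worker_dev dev_a = {
                .vhostfd = scsi_fd_a, .nvqs = 2, .vqs = { 2, 3 },
        };
        struct vhost_run_worker_dev dev_b = {
                .vhostfd = scsi_fd_b, .nvqs = 2, .vqs = { 2, 3 },
        };
        struct vhost_run_worker_info info = {
                .timeout  = NULL,
                .sigmask  = NULL,
                .ndevices = 2,
                .devices  = { &dev_a, &dev_b },
        };

        return ioctl(fd, VHOST_WORKER_RUN, &info);
}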
From a security perspective it means the userspace thread has access to
all vhost devices (because it has their fds).
I'm not sure how the mm is supposed to work. The devices might be
associated with different userspace processes (guests) and therefore
have different virtual memory.
Just wanted to push this discussion along a little further. I'm buried
under emails and probably wont be very active over the next few days.
Stefan
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-12-01 12:59 ` Stefan Hajnoczi
@ 2020-12-01 13:45 ` Stefano Garzarella
2020-12-01 17:43 ` Stefan Hajnoczi
0 siblings, 1 reply; 27+ messages in thread
From: Stefano Garzarella @ 2020-12-01 13:45 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Paolo Bonzini, Mike Christie
On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote:
>On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote:
>> On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
>> > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> > >
>> > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
>> > > <michael.christie@oracle.com> wrote:
>> > > >
>> > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
>> > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
>> > > > > <michael.christie@oracle.com> wrote:
>> > > > >>
>> > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
>> > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
>> > > > > struct vhost_run_worker_info {
>> > > > > struct timespec *timeout;
>> > > > > sigset_t *sigmask;
>> > > > >
>> > > > > /* List of virtqueues to process */
>> > > > > unsigned nvqs;
>> > > > > unsigned vqs[];
>> > > > > };
>> > > > >
>> > > > > /* This blocks until the timeout is reached, a signal is received, or
>> > > > > the vhost device is destroyed */
>> > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
>> > > > >
>> > > > > As you can see, userspace isn't involved with dealing with the
>> > > > > requests. It just acts as a thread donor to the vhost driver.
>> > > > >
>> > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
>> > > > > penalty of switching into the kernel, copying in the arguments, etc.
>> > > >
>> > > > I didn't get this part. Why have the timeout? When the timeout expires,
>> > > > does userspace just call right back down to the kernel or does it do
>> > > > some sort of processing/operation?
>> > > >
>> > > > You could have your worker function run from that ioctl wait for a
>> > > > signal or a wake up call from the vhost_work/poll functions.
>> > >
>> > > An optional timeout argument is common in blocking interfaces like
>> > > poll(2), recvmmsg(2), etc.
>> > >
>> > > Although something can send a signal to the thread instead,
>> > > implementing that in an application is more awkward than passing a
>> > > struct timespec.
>> > >
>> > > Compared to other blocking calls we don't expect
>> > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
>> > > rarely be used and can be dropped from the interface.
>> > >
>> > > BTW the code I posted wasn't a carefully thought out proposal :). The
>> > > details still need to be considered and I'm going to be offline for
>> > > the next week so maybe someone else can think it through in the
>> > > meantime.
>> >
>> > One final thought before I'm offline for a week. If
>> > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
>> > then it's hard to support poll-mode (busy waiting) workers because
>> > each device instance consumes a whole CPU. If we stick to an interface
>> > where the kernel manages the worker threads then it's easier to share
>> > workers between devices for polling.
>>
>>
>> Yes, that is the reason vhost did its own thing in the first place.
>>
>>
>> I am vaguely thinking about poll(2) or a similar interface,
>> which can wait for an event on multiple FDs.
>
>I can imagine how using poll(2) would work from a userspace perspective,
>but on the kernel side I don't think it can be implemented cleanly.
>poll(2) is tied to the file_operations->poll() callback and
>read/write/error events. Not to mention there isn't a way to substitute
>the vhost worker thread function instead of scheduling out the current
>thread while waiting for poll fd events.
>
>But maybe ioctl(VHOST_WORKER_RUN) can do it:
>
> struct vhost_run_worker_dev {
> int vhostfd; /* /dev/vhost-TYPE fd */
> unsigned nvqs; /* number of virtqueues in vqs[] */
> unsigned vqs[]; /* virtqueues to process */
> };
>
> struct vhost_run_worker_info {
> struct timespec *timeout;
> sigset_t *sigmask;
>
> unsigned ndevices;
> struct vhost_run_worker_dev *devices[];
> };
>
>In the simple case userspace sets ndevices to 1 and we just handle
>virtqueues for the current device.
>
>In the fancier shared worker thread case the userspace process has the
>vhost fds of all the devices it is processing and passes them to
>ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements.
Which fd will be used for this IOCTL? One of the 'vhostfd' or we should
create a new /dev/vhost-workers (or something similar)?
Maybe the new device will be cleaner and can be reused also for other
stuff (I'm thinking about vDPA software devices).
>
>From a security perspective it means the userspace thread has access to
>all vhost devices (because it has their fds).
>
>I'm not sure how the mm is supposed to work. The devices might be
>associated with different userspace processes (guests) and therefore
>have different virtual memory.
Maybe in this case we should do something similar to io_uring SQPOLL
kthread where kthread_use_mm()/kthread_unuse_mm() is used to switch
virtual memory spaces.
After writing, I saw that we already do this in the vhost_worker() in
drivers/vhost/vhost.c
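For reference, the pattern being referred to looks roughly like this on
the kernel side (a simplified sketch, not the actual vhost code;
next_pending_work() is a made-up placeholder for vhost's work list
handling):

#include <linux/kthread.h>
#include "vhost.h"      /* drivers/vhost/vhost.h: vhost_dev, vhost_work */

static struct vhost_work *next_pending_work(struct vhost_dev *dev);

/* A shared worker adopts each device's mm before running work that
 * touches guest memory, and drops it before moving to the next device. */
static void process_device_work(struct vhost_dev *dev)
{
        struct vhost_work *work;

        kthread_use_mm(dev->mm);
        while ((work = next_pending_work(dev)) != NULL)
                work->fn(work);
        kthread_unuse_mm(dev->mm);
}

With user-donated threads the same switch would presumably have to
happen whenever the worker crosses from one device's virtqueues to
another's.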
>
>Just wanted to push this discussion along a little further. I'm buried
>under emails and probably wont be very active over the next few days.
>
I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe
the least difficult one.
Thanks,
Stefano
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-12-01 13:45 ` Stefano Garzarella
@ 2020-12-01 17:43 ` Stefan Hajnoczi
2020-12-02 10:35 ` Stefano Garzarella
0 siblings, 1 reply; 27+ messages in thread
From: Stefan Hajnoczi @ 2020-12-01 17:43 UTC (permalink / raw)
To: Stefano Garzarella
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Paolo Bonzini, Mike Christie
On Tue, Dec 01, 2020 at 02:45:18PM +0100, Stefano Garzarella wrote:
> On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote:
> > On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
> > > > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > >
> > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
> > > > > <michael.christie@oracle.com> wrote:
> > > > > >
> > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
> > > > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
> > > > > > > <michael.christie@oracle.com> wrote:
> > > > > > >>
> > > > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
> > > > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
> > > > > > > struct vhost_run_worker_info {
> > > > > > > struct timespec *timeout;
> > > > > > > sigset_t *sigmask;
> > > > > > >
> > > > > > > /* List of virtqueues to process */
> > > > > > > unsigned nvqs;
> > > > > > > unsigned vqs[];
> > > > > > > };
> > > > > > >
> > > > > > > /* This blocks until the timeout is reached, a signal is received, or
> > > > > > > the vhost device is destroyed */
> > > > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
> > > > > > >
> > > > > > > As you can see, userspace isn't involved with dealing with the
> > > > > > > requests. It just acts as a thread donor to the vhost driver.
> > > > > > >
> > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
> > > > > > > penalty of switching into the kernel, copying in the arguments, etc.
> > > > > >
> > > > > > I didn't get this part. Why have the timeout? When the timeout expires,
> > > > > > does userspace just call right back down to the kernel or does it do
> > > > > > some sort of processing/operation?
> > > > > >
> > > > > > You could have your worker function run from that ioctl wait for a
> > > > > > signal or a wake up call from the vhost_work/poll functions.
> > > > >
> > > > > An optional timeout argument is common in blocking interfaces like
> > > > > poll(2), recvmmsg(2), etc.
> > > > >
> > > > > Although something can send a signal to the thread instead,
> > > > > implementing that in an application is more awkward than passing a
> > > > > struct timespec.
> > > > >
> > > > > Compared to other blocking calls we don't expect
> > > > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
> > > > > rarely be used and can be dropped from the interface.
> > > > >
> > > > > BTW the code I posted wasn't a carefully thought out proposal :). The
> > > > > details still need to be considered and I'm going to be offline for
> > > > > the next week so maybe someone else can think it through in the
> > > > > meantime.
> > > >
> > > > One final thought before I'm offline for a week. If
> > > > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
> > > > then it's hard to support poll-mode (busy waiting) workers because
> > > > each device instance consumes a whole CPU. If we stick to an interface
> > > > where the kernel manages the worker threads then it's easier to share
> > > > workers between devices for polling.
> > >
> > >
> > > Yes, that is the reason vhost did its own thing in the first place.
> > >
> > >
> > > I am vaguely thinking about poll(2) or a similar interface,
> > > which can wait for an event on multiple FDs.
> >
> > I can imagine how using poll(2) would work from a userspace perspective,
> > but on the kernel side I don't think it can be implemented cleanly.
> > poll(2) is tied to the file_operations->poll() callback and
> > read/write/error events. Not to mention there isn't a way to substitute
> > the vhost worker thread function instead of scheduling out the current
> > thread while waiting for poll fd events.
> >
> > But maybe ioctl(VHOST_WORKER_RUN) can do it:
> >
> > struct vhost_run_worker_dev {
> > int vhostfd; /* /dev/vhost-TYPE fd */
> > unsigned nvqs; /* number of virtqueues in vqs[] */
> > unsigned vqs[]; /* virtqueues to process */
> > };
> >
> > struct vhost_run_worker_info {
> > struct timespec *timeout;
> > sigset_t *sigmask;
> >
> > unsigned ndevices;
> > struct vhost_run_worker_dev *devices[];
> > };
> >
> > In the simple case userspace sets ndevices to 1 and we just handle
> > virtqueues for the current device.
> >
> > In the fancier shared worker thread case the userspace process has the
> > vhost fds of all the devices it is processing and passes them to
> > ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements.
>
> Which fd will be used for this IOCTL? One of the 'vhostfd' or we should
> create a new /dev/vhost-workers (or something similar)?
>
> Maybe the new device will be cleaner and can be reused also for other stuff
> (I'm thinking about vDPA software devices).
>
> >
> > From a security perspective it means the userspace thread has access to
> > all vhost devices (because it has their fds).
> >
> > I'm not sure how the mm is supposed to work. The devices might be
> > associated with different userspace processes (guests) and therefore
> > have different virtual memory.
>
> Maybe in this case we should do something similar to io_uring SQPOLL kthread
> where kthread_use_mm()/kthread_unuse_mm() is used to switch virtual memory
> spaces.
>
> After writing, I saw that we already do this in the vhost_worker() in
> drivers/vhost/vhost.c
>
> >
> > Just wanted to push this discussion along a little further. I'm buried
> > under emails and probably wont be very active over the next few days.
> >
>
> I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe the
> least difficult one.
Sending an ioctl API proposal email could help progress this discussion.
Interesting questions:
1. How to specify which virtqueues to process (Mike's use case)?
2. How to process multiple devices?
Stefan
* Re: [PATCH 1/1] qemu vhost scsi: add VHOST_SET_VRING_ENABLE support
[not found] ` <1605223150-10888-2-git-send-email-michael.christie@oracle.com>
2020-11-17 11:53 ` [PATCH 1/1] qemu vhost scsi: add VHOST_SET_VRING_ENABLE support Stefan Hajnoczi
@ 2020-12-02 9:59 ` Michael S. Tsirkin
1 sibling, 0 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2020-12-02 9:59 UTC (permalink / raw)
To: Mike Christie
Cc: fam, linux-scsi, qemu-devel, virtualization, target-devel,
stefanha, pbonzini
On Thu, Nov 12, 2020 at 05:19:00PM -0600, Mike Christie wrote:
> diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
> index 7523218..98dd919 100644
> --- a/linux-headers/linux/vhost.h
> +++ b/linux-headers/linux/vhost.h
> @@ -70,6 +70,7 @@
> #define VHOST_VRING_BIG_ENDIAN 1
> #define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state)
> #define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state)
> +#define VHOST_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x15, struct vhost_vring_state)
OK so first we need the kernel patches, then update the header, then
we can apply the qemu patch.
> /* The following ioctls use eventfd file descriptors to signal and poll
> * for events. */
> --
> 1.8.3.1
* Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
2020-12-01 17:43 ` Stefan Hajnoczi
@ 2020-12-02 10:35 ` Stefano Garzarella
0 siblings, 0 replies; 27+ messages in thread
From: Stefano Garzarella @ 2020-12-02 10:35 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: fam, linux-scsi, Michael S. Tsirkin, qemu-devel,
Linux Virtualization, target-devel, Paolo Bonzini, Mike Christie
On Tue, Dec 01, 2020 at 05:43:38PM +0000, Stefan Hajnoczi wrote:
>On Tue, Dec 01, 2020 at 02:45:18PM +0100, Stefano Garzarella wrote:
>> On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote:
>> > On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote:
>> > > On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote:
>> > > > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> > > > >
>> > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie
>> > > > > <michael.christie@oracle.com> wrote:
>> > > > > >
>> > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
>> > > > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
>> > > > > > > <michael.christie@oracle.com> wrote:
>> > > > > > >>
>> > > > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
>> > > > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
>> > > > > > > struct vhost_run_worker_info {
>> > > > > > > struct timespec *timeout;
>> > > > > > > sigset_t *sigmask;
>> > > > > > >
>> > > > > > > /* List of virtqueues to process */
>> > > > > > > unsigned nvqs;
>> > > > > > > unsigned vqs[];
>> > > > > > > };
>> > > > > > >
>> > > > > > > /* This blocks until the timeout is reached, a signal is received, or
>> > > > > > > the vhost device is destroyed */
>> > > > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info);
>> > > > > > >
>> > > > > > > As you can see, userspace isn't involved with dealing with the
>> > > > > > > requests. It just acts as a thread donor to the vhost driver.
>> > > > > > >
>> > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the
>> > > > > > > penalty of switching into the kernel, copying in the arguments, etc.
>> > > > > >
>> > > > > > I didn't get this part. Why have the timeout? When the timeout expires,
>> > > > > > does userspace just call right back down to the kernel or does it do
>> > > > > > some sort of processing/operation?
>> > > > > >
>> > > > > > You could have your worker function run from that ioctl wait for a
>> > > > > > signal or a wake up call from the vhost_work/poll functions.
>> > > > >
>> > > > > An optional timeout argument is common in blocking interfaces like
>> > > > > poll(2), recvmmsg(2), etc.
>> > > > >
>> > > > > Although something can send a signal to the thread instead,
>> > > > > implementing that in an application is more awkward than passing a
>> > > > > struct timespec.
>> > > > >
>> > > > > Compared to other blocking calls we don't expect
>> > > > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will
>> > > > > rarely be used and can be dropped from the interface.
>> > > > >
>> > > > > BTW the code I posted wasn't a carefully thought out proposal
>> > > > > :). The
>> > > > > details still need to be considered and I'm going to be offline for
>> > > > > the next week so maybe someone else can think it through in the
>> > > > > meantime.
>> > > >
>> > > > One final thought before I'm offline for a week. If
>> > > > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance
>> > > > then it's hard to support poll-mode (busy waiting) workers because
>> > > > each device instance consumes a whole CPU. If we stick to an interface
>> > > > where the kernel manages the worker threads then it's easier to
>> > > > share
>> > > > workers between devices for polling.
>> > >
>> > >
>> > > Yes, that is the reason vhost did its own thing in the first place.
>> > >
>> > >
>> > > I am vaguely thinking about poll(2) or a similar interface,
>> > > which can wait for an event on multiple FDs.
>> >
>> > I can imagine how using poll(2) would work from a userspace perspective,
>> > but on the kernel side I don't think it can be implemented cleanly.
>> > poll(2) is tied to the file_operations->poll() callback and
>> > read/write/error events. Not to mention there isn't a way to substitute
>> > the vhost worker thread function instead of scheduling out the current
>> > thread while waiting for poll fd events.
>> >
>> > But maybe ioctl(VHOST_WORKER_RUN) can do it:
>> >
>> > struct vhost_run_worker_dev {
>> > int vhostfd; /* /dev/vhost-TYPE fd */
>> > unsigned nvqs; /* number of virtqueues in vqs[] */
>> > unsigned vqs[]; /* virtqueues to process */
>> > };
>> >
>> > struct vhost_run_worker_info {
>> > struct timespec *timeout;
>> > sigset_t *sigmask;
>> >
>> > unsigned ndevices;
>> > struct vhost_run_worker_dev *devices[];
>> > };
>> >
>> > In the simple case userspace sets ndevices to 1 and we just handle
>> > virtqueues for the current device.
>> >
>> > In the fancier shared worker thread case the userspace process has the
>> > vhost fds of all the devices it is processing and passes them to
>> > ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements.
>>
>> Which fd will be used for this IOCTL? One of the 'vhostfd' or we should
>> create a new /dev/vhost-workers (or something similar)?
>>
>> Maybe the new device will be cleaner and can be reused also for other stuff
>> (I'm thinking about vDPA software devices).
>>
>> >
>> > From a security perspective it means the userspace thread has access to
>> > all vhost devices (because it has their fds).
>> >
>> > I'm not sure how the mm is supposed to work. The devices might be
>> > associated with different userspace processes (guests) and therefore
>> > have different virtual memory.
>>
>> Maybe in this case we should do something similar to io_uring SQPOLL kthread
>> where kthread_use_mm()/kthread_unuse_mm() is used to switch virtual memory
>> spaces.
>>
>> After writing, I saw that we already do this in the vhost_worker() in
>> drivers/vhost/vhost.c
>>
>> >
>> > Just wanted to push this discussion along a little further. I'm buried
>> > under emails and probably wont be very active over the next few days.
>> >
>>
>> I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe the
>> least difficult one.
>
>Sending an ioctl API proposal email could help progress this
>discussion.
>
>Interesting questions:
>1. How to specify which virtqueues to process (Mike's use case)?
>2. How to process multiple devices?
>
Okay, I'll try to prepare a tentative proposal next week with those
questions in mind :-)
Thanks,
Stefano
End of thread (newest message: 2020-12-02 10:35 UTC)
Thread overview: 27+ messages
[not found] <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
[not found] ` <1605223150-10888-3-git-send-email-michael.christie@oracle.com>
2020-11-17 13:04 ` [PATCH 01/10] vhost: remove work arg from vhost_work_flush Stefan Hajnoczi
[not found] ` <1605223150-10888-4-git-send-email-michael.christie@oracle.com>
2020-11-17 13:07 ` [PATCH 02/10] vhost scsi: remove extra flushes Stefan Hajnoczi
[not found] ` <1605223150-10888-5-git-send-email-michael.christie@oracle.com>
2020-11-17 13:07 ` [PATCH 03/10] vhost poll: fix coding style Stefan Hajnoczi
[not found] ` <1605223150-10888-7-git-send-email-michael.christie@oracle.com>
2020-11-17 15:32 ` [PATCH 05/10] vhost: poll support support multiple workers Stefan Hajnoczi
[not found] ` <1605223150-10888-8-git-send-email-michael.christie@oracle.com>
2020-11-17 16:04 ` [PATCH 06/10] vhost scsi: make SCSI cmd completion per vq Stefan Hajnoczi
[not found] ` <1605223150-10888-9-git-send-email-michael.christie@oracle.com>
2020-11-17 16:05 ` [PATCH 07/10] vhost, vhost-scsi: flush IO vqs then send TMF rsp Stefan Hajnoczi
[not found] ` <1605223150-10888-10-git-send-email-michael.christie@oracle.com>
2020-11-17 16:08 ` [PATCH 08/10] vhost: move msg_handler to new ops struct Stefan Hajnoczi
[not found] ` <1605223150-10888-11-git-send-email-michael.christie@oracle.com>
2020-11-17 16:14 ` [PATCH 09/10] vhost: add VHOST_SET_VRING_ENABLE support Stefan Hajnoczi
2020-11-17 16:40 ` [PATCH 00/10] vhost/qemu: thread per IO SCSI vq Stefan Hajnoczi
2020-11-18 5:17 ` Jason Wang
[not found] ` <8318de9f-c585-39f8-d931-1ff5e0341d75@oracle.com>
2020-11-18 7:54 ` Jason Wang
[not found] ` <d6ffcf17-ab12-4830-cc3c-0f0402fb8a0f@oracle.com>
2020-11-19 4:35 ` Jason Wang
[not found] ` <b3343762-bb11-b750-46ec-43b5556f2b8e@oracle.com>
2020-11-18 9:54 ` Michael S. Tsirkin
2020-11-19 14:00 ` Stefan Hajnoczi
2020-11-18 11:31 ` Stefan Hajnoczi
2020-11-19 14:46 ` Michael S. Tsirkin
[not found] ` <ceebdc90-3ffc-1563-ff85-12a848bcba18@oracle.com>
2020-11-19 16:24 ` Stefan Hajnoczi
[not found] ` <ffd88f0c-981e-a102-4b08-f29d6b9a0f71@oracle.com>
2020-11-19 17:08 ` Stefan Hajnoczi
2020-11-20 8:45 ` Stefan Hajnoczi
2020-11-20 12:31 ` Michael S. Tsirkin
2020-12-01 12:59 ` Stefan Hajnoczi
2020-12-01 13:45 ` Stefano Garzarella
2020-12-01 17:43 ` Stefan Hajnoczi
2020-12-02 10:35 ` Stefano Garzarella
2020-11-23 15:17 ` Stefano Garzarella
[not found] ` <1605223150-10888-2-git-send-email-michael.christie@oracle.com>
2020-11-17 11:53 ` [PATCH 1/1] qemu vhost scsi: add VHOST_SET_VRING_ENABLE support Stefan Hajnoczi
2020-12-02 9:59 ` Michael S. Tsirkin