* Re: [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures
[not found] ` <1603326903-27052-5-git-send-email-michael.christie@oracle.com>
@ 2020-10-22 5:22 ` kernel test robot
2020-11-02 5:57 ` Jason Wang
2020-11-03 10:04 ` Dan Carpenter
2 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2020-10-22 5:22 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, pbonzini, stefanha, virtualization
Cc: clang-built-linux, kbuild-all
Hi Mike,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on vhost/linux-next]
[also build test WARNING on v5.9 next-20201021]
[cannot apply to target/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-fix-scsi-cmd-handling-and-cgroup-support/20201022-083844
base: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: x86_64-randconfig-a013-20201021 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project ee6abef5323d59b983129bf3514ef6775d1d6cd5)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# https://github.com/0day-ci/linux/commit/6e1629548d318c2c9af7490379a3c9d7e3cba0d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Mike-Christie/vhost-fix-scsi-cmd-handling-and-cgroup-support/20201022-083844
git checkout 6e1629548d318c2c9af7490379a3c9d7e3cba0d5
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> drivers/vhost/vsock.c:633:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/vhost/vsock.c:648:9: note: uninitialized use occurs here
return ret;
^~~
drivers/vhost/vsock.c:633:2: note: remove the 'if' if its condition is always false
if (vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/vhost/vsock.c:609:9: note: initialize the variable 'ret' to silence this warning
int ret;
^
= 0
1 warning generated.
vim +633 drivers/vhost/vsock.c
604
605 static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
606 {
607 struct vhost_virtqueue **vqs;
608 struct vhost_vsock *vsock;
609 int ret;
610
611 /* This struct is large and allocation could fail, fall back to vmalloc
612 * if there is no other way.
613 */
614 vsock = kvmalloc(sizeof(*vsock), GFP_KERNEL | __GFP_RETRY_MAYFAIL);
615 if (!vsock)
616 return -ENOMEM;
617
618 vqs = kmalloc_array(ARRAY_SIZE(vsock->vqs), sizeof(*vqs), GFP_KERNEL);
619 if (!vqs) {
620 ret = -ENOMEM;
621 goto out;
622 }
623
624 vsock->guest_cid = 0; /* no CID assigned yet */
625
626 atomic_set(&vsock->queued_replies, 0);
627
628 vqs[VSOCK_VQ_TX] = &vsock->vqs[VSOCK_VQ_TX];
629 vqs[VSOCK_VQ_RX] = &vsock->vqs[VSOCK_VQ_RX];
630 vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
631 vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick;
632
> 633 if (vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
634 UIO_MAXIOV, VHOST_VSOCK_PKT_WEIGHT,
635 VHOST_VSOCK_WEIGHT, true, NULL))
636 goto err_dev_init;
637
638 file->private_data = vsock;
639 spin_lock_init(&vsock->send_pkt_list_lock);
640 INIT_LIST_HEAD(&vsock->send_pkt_list);
641 vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
642 return 0;
643
644 err_dev_init:
645 kfree(vqs);
646 out:
647 vhost_vsock_free(vsock);
648 return ret;
649 }
650
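One way to address this -- just a sketch, assuming vhost_dev_init() in this
series returns a negative errno on failure -- is to capture the return value
so 'ret' is always initialized on the error path:

	ret = vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
			     UIO_MAXIOV, VHOST_VSOCK_PKT_WEIGHT,
			     VHOST_VSOCK_WEIGHT, true, NULL);
	if (ret)
		goto err_dev_init;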
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 01/17] vhost scsi: add lun parser helper
[not found] ` <1603326903-27052-2-git-send-email-michael.christie@oracle.com>
@ 2020-10-26 3:33 ` Jason Wang
0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2020-10-26 3:33 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/10/22 8:34 AM, Mike Christie wrote:
> Move code to parse lun from req's lun_buf to helper, so tmf code
> can use it in the next patch.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> drivers/vhost/scsi.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
Acked-by: Jason Wang <jasowang@redhat.com>
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index b22adf0..0ea78d0 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -907,6 +907,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> return ret;
> }
>
> +static u16 vhost_buf_to_lun(u8 *lun_buf)
> +{
> + return ((lun_buf[2] << 8) | lun_buf[3]) & 0x3FFF;
> +}
> +
> static void
> vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
> {
> @@ -1045,12 +1050,12 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> tag = vhost64_to_cpu(vq, v_req_pi.tag);
> task_attr = v_req_pi.task_attr;
> cdb = &v_req_pi.cdb[0];
> - lun = ((v_req_pi.lun[2] << 8) | v_req_pi.lun[3]) & 0x3FFF;
> + lun = vhost_buf_to_lun(v_req_pi.lun);
> } else {
> tag = vhost64_to_cpu(vq, v_req.tag);
> task_attr = v_req.task_attr;
> cdb = &v_req.cdb[0];
> - lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
> + lun = vhost_buf_to_lun(v_req.lun);
> }
> /*
> * Check that the received CDB size does not exceeded our
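For anyone not familiar with the encoding: assuming the usual virtio-scsi
flat-addressing LUN format, where bytes 2-3 of the 8-byte LUN field carry
0x4000 | lun, the helper just strips the addressing bits. A made-up example:

	/* virtio-scsi LUN field for target 0, LUN 5, flat addressing */
	u8 lun_buf[8] = { 1, 0, 0x40, 0x05, 0, 0, 0, 0 };

	/* ((0x40 << 8) | 0x05) & 0x3FFF == 5 */
	u16 lun = vhost_buf_to_lun(lun_buf);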
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 03/17] vhost net: use goto error handling in open
[not found] ` <1603326903-27052-4-git-send-email-michael.christie@oracle.com>
@ 2020-10-26 3:34 ` Jason Wang
0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2020-10-26 3:34 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/10/22 8:34 AM, Mike Christie wrote:
> In the next patches vhost_dev_init will be able to fail. This patch has
> vhost_net_open use goto error handling, as is done in the other vhost
> code, to make vhost_dev_init failures easier to handle and extend in
> the future.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vhost/net.c | 29 ++++++++++++++---------------
> 1 file changed, 14 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 531a00d..831d824 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1286,27 +1286,18 @@ static int vhost_net_open(struct inode *inode, struct file *f)
> if (!n)
> return -ENOMEM;
> vqs = kmalloc_array(VHOST_NET_VQ_MAX, sizeof(*vqs), GFP_KERNEL);
> - if (!vqs) {
> - kvfree(n);
> - return -ENOMEM;
> - }
> + if (!vqs)
> + goto err_vqs;
>
> queue = kmalloc_array(VHOST_NET_BATCH, sizeof(void *),
> GFP_KERNEL);
> - if (!queue) {
> - kfree(vqs);
> - kvfree(n);
> - return -ENOMEM;
> - }
> + if (!queue)
> + goto err_queue;
> n->vqs[VHOST_NET_VQ_RX].rxq.queue = queue;
>
> xdp = kmalloc_array(VHOST_NET_BATCH, sizeof(*xdp), GFP_KERNEL);
> - if (!xdp) {
> - kfree(vqs);
> - kvfree(n);
> - kfree(queue);
> - return -ENOMEM;
> - }
> + if (!xdp)
> + goto err_xdp;
> n->vqs[VHOST_NET_VQ_TX].xdp = xdp;
>
> dev = &n->dev;
> @@ -1338,6 +1329,14 @@ static int vhost_net_open(struct inode *inode, struct file *f)
> n->refcnt_bias = 0;
>
> return 0;
> +
> +err_xdp:
> + kfree(queue);
> +err_queue:
> + kfree(vqs);
> +err_vqs:
> + kvfree(n);
> + return -ENOMEM;
> }
>
> static struct socket *vhost_net_stop_vq(struct vhost_net *n,
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
[not found] ` <1603326903-27052-8-git-send-email-michael.christie@oracle.com>
@ 2020-10-26 3:51 ` Jason Wang
[not found] ` <49c2fc29-348c-06db-4823-392f7476d318@oracle.com>
0 siblings, 1 reply; 13+ messages in thread
From: Jason Wang @ 2020-10-26 3:51 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/10/22 8:34 AM, Mike Christie wrote:
> Each vhost-scsi device will need an evt and ctl queue, but the number
> of IO queues depends on whatever the user has configured in userspace.
> This patch has vhost-scsi create the evt, ctl and one IO vq at device
> open time. We then create the other IO vqs when userspace starts to
> set them up. We still waste some mem on the vq and scsi vq structs,
> but we don't waste mem on iovec related arrays and for later patches
> we know which queues are used by the dev->nvqs value.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 19 +++++++++++++++----
> 1 file changed, 15 insertions(+), 4 deletions(-)
Not familiar with SCSI. But I wonder if it could behave like vhost-net.
E.g. userspace should know the number of virtqueues so it can just open
and close multiple vhost-scsi file descriptors.
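Roughly like the sketch below from the userspace side (this is the existing
vhost-net model, not vhost-scsi code; error handling is omitted and nr_queues
is whatever the management layer chose):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	static int open_vhost_queues(int *fds, int nr_queues)
	{
		int i;

		for (i = 0; i < nr_queues; i++) {
			/* one vhost fd (and one kernel worker) per queue pair */
			fds[i] = open("/dev/vhost-net", O_RDWR);
			if (fds[i] < 0)
				return -1;
			if (ioctl(fds[i], VHOST_SET_OWNER) < 0)
				return -1;
			/* VHOST_SET_VRING_* and VHOST_NET_SET_BACKEND follow per fd */
		}
		return 0;
	}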
Thanks
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
[not found] ` <49c2fc29-348c-06db-4823-392f7476d318@oracle.com>
@ 2020-10-28 1:55 ` Jason Wang
2020-10-30 8:47 ` Michael S. Tsirkin
1 sibling, 0 replies; 13+ messages in thread
From: Jason Wang @ 2020-10-28 1:55 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/10/27 下午1:47, Mike Christie wrote:
> On 10/25/20 10:51 PM, Jason Wang wrote:
>>
>> On 2020/10/22 8:34 AM, Mike Christie wrote:
>>> Each vhost-scsi device will need an evt and ctl queue, but the number
>>> of IO queues depends on whatever the user has configured in userspace.
>>> This patch has vhost-scsi create the evt, ctl and one IO vq at device
>>> open time. We then create the other IO vqs when userspace starts to
>>> set them up. We still waste some mem on the vq and scsi vq structs,
>>> but we don't waste mem on iovec related arrays and for later patches
>>> we know which queues are used by the dev->nvqs value.
>>>
>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>> ---
>>> drivers/vhost/scsi.c | 19 +++++++++++++++----
>>> 1 file changed, 15 insertions(+), 4 deletions(-)
>>
>>
>> Not familiar with SCSI. But I wonder if it could behave like vhost-net.
>>
>> E.g. userspace should know the number of virtqueues so it can just
>> open and close multiple vhost-scsi file descriptors.
>>
>
> One hiccup I'm hitting is that we might end up creating about 3x more
> vqs than we need. The problem is that for scsi each vhost device has:
>
> vq=0: special control vq
> vq=1: event vq
> vq=2 and above: SCSI CMD/IO vqs. We want to create N of these.
>
> Today we do:
>
> Userspace does open(/dev/vhost-scsi)
> vhost_dev_init(create 128 vqs and then later we setup and use
> N of them);
>
> Qemu does ioctl(VHOST_SET_OWNER)
> vhost_dev_set_owner()
>
> For N vqs userspace does:
> // virtqueue setup related ioctls
>
> Qemu does ioctl(VHOST_SCSI_SET_ENDPOINT)
> - match LIO/target port to vhost_dev
>
>
> So we could change that to:
>
> For N IO vqs userspace does
> open(/dev/vhost-scsi)
> vhost_dev_init(create IO, evt, and ctl);
>
> for N IO vqs Qemu does:
> ioctl(VHOST_SET_OWNER)
> vhost_dev_set_owner()
>
> for N IO vqs Qemu does:
> // virtqueue setup related ioctls
>
> for N IO vqs Qemu does:
> ioctl(VHOST_SCSI_SET_ENDPOINT)
> - match LIO/target port to vhost_dev and assemble the
> multiple vhost_dev device.
>
> The problem is that we have to setup some of the evt/ctl specific
> parts at open() time when vhost_dev_init does vhost_poll_init for
> example.
>
> - At open time, we don't know if this vhost_dev is going to be part of
> a multiple vhost_device device or a single one so we need to create at
> least 3 of them
> - If it is a multiple device we don't know if it's the first device
> being created for the device or the N'th, so we don't know if the
> dev's vqs will be used for IO or ctls/evts, so we have to create all 3.
>
> When we get the first VHOST_SCSI_SET_ENDPOINT call for a new style
> multiple vhost_dev device, we can use that dev's evt/ctl vqs for
> events/controls requests. When we get the other
> VHOST_SCSI_SET_ENDPOINT calls for the multiple vhost_dev device then
> those dev's evt/ctl vqs will be ignored and we will only use their IO
> vqs. So we end up with a lot of extra vqs.
Right, so in this case we can probably use this patch to address this
issue. If the evt/ctl vq is not used, we won't even create them.
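So today's single-fd flow quoted above is essentially the following from the
userspace side (a rough sketch; error handling omitted, and the wwpn is
whatever the LIO target was configured with):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	static int vhost_scsi_bringup(const char *wwpn)
	{
		struct vhost_scsi_target backend;
		int fd;

		fd = open("/dev/vhost-scsi", O_RDWR);	/* vhost_dev_init() creates 128 vqs */
		ioctl(fd, VHOST_SET_OWNER);		/* vhost_dev_set_owner() */

		/* per-vq VHOST_SET_VRING_NUM/ADDR/KICK/CALL ioctls for the N vqs used */

		memset(&backend, 0, sizeof(backend));
		strncpy(backend.vhost_wwpn, wwpn, sizeof(backend.vhost_wwpn) - 1);
		/* match the LIO/target port to this vhost_dev */
		return ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &backend);
	}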
>
>
> One other question/issue I have is that qemu can open the
> /dev/vhost-scsi device or it allows tools like libvirtd to open the
> device and pass in the fd to use.
It allows libvirt to open and pass fds to qemu. This is how multi-queue
virtio-net is done: libvirt is in charge of opening multiple file
descriptors and passing them to qemu.
> For the latter case, would we continue to have those tools pass in the
> leading fd, then have qemu do the other num_queues - 1
> open(/dev/vhost-scsi) calls? Or do these apps that pass in the fd need
> to know about all of the fds for some management reason?
Usually qemu is running without privilege. So it depends on the
management to open the device.
Note that I'm not objecting to your proposal, just want to see if it could be
done in an easier way. During the development of multi-queue
virtio-net, something similar to what you've done was proposed, but we ended
up with the multiple vhost-net fd model, which keeps the kernel code unchanged.
Thanks
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 00/17 V3] vhost: fix scsi cmd handling and cgroup support
[not found] <1603326903-27052-1-git-send-email-michael.christie@oracle.com>
` (3 preceding siblings ...)
[not found] ` <1603326903-27052-8-git-send-email-michael.christie@oracle.com>
@ 2020-10-29 21:47 ` Michael S. Tsirkin
[not found] ` <1603326903-27052-10-git-send-email-michael.christie@oracle.com>
5 siblings, 0 replies; 13+ messages in thread
From: Michael S. Tsirkin @ 2020-10-29 21:47 UTC (permalink / raw)
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Wed, Oct 21, 2020 at 07:34:46PM -0500, Mike Christie wrote:
>
> The following patches were made over Michael's vhost branch here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>
> They fix a couple issues with vhost-scsi when we hit the 256 cmd limit
> that result in the guest getting IO errors, add LUN reset support so
> devices are not offlined during transient errors, allow us to manage
> vhost scsi IO with cgroups, and improve IOPs up to 2X.
>
> The following patches are a follow up to this post:
> https://patchwork.kernel.org/project/target-devel/cover/1600712588-9514-1-git-send-email-michael.christie@oracle.com/
> which originally was fixing how vhost-scsi handled cmds so we would
> not get IO errors when sending more than 256 cmds.
>
> In that patchset I needed to detect if a vq was in use and for this
> patch:
> https://patchwork.kernel.org/project/target-devel/patch/1600712588-9514-3-git-send-email-michael.christie@oracle.com/
> It was suggested to add support for VHOST_RING_ENABLE. While doing
> that though I hit a couple problems:
>
> 1. The patches moved how vhost-scsi allocated cmds from per lio
> session to per vhost vq. To support both VHOST_RING_ENABLE and
> where userspace didn't support it, I would have to keep around the
> old per session/device cmd allocator/completion and then also maintain
> the new code. Or, I would still have to use this patch
> patchwork.kernel.org/cover/11790763/ for the compat case so there
> adding the new ioctl would not help much.
>
> 2. For vhost-scsi I also wanted to prevent where we allocate iovecs
> for 128 vqs even though we normally use a couple. To do this, I needed
> something similar to #1, but the problem is that the VHOST_RING_ENABLE
> call would come too late.
>
> To try and balance #1 and #2, these patches just allow vhost-scsi
> to set up a vq when userspace starts to config it. This allows the
> driver to only fully set up what is used (we still waste some memory to
> support older setups, but do not have to preallocate everything like
> before), plus I do not need to maintain 2 code paths.
OK, so could we get a patchset with just bugfixes for this release
please?
And features should go into next one ...
> V3:
> - fix compile errors
> - fix possible crash where cmd could be freed while adding it to
> completion list
> - fix issue where we added the worker thread to the blk cgroup but
> the blk IO was submitted by a driver workqueue.
>
> V2:
> - fix use before set cpu var errors
> - drop vhost_vq_is_setup
> - include patches to do a worker thread per scsi IO vq
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
[not found] ` <49c2fc29-348c-06db-4823-392f7476d318@oracle.com>
2020-10-28 1:55 ` Jason Wang
@ 2020-10-30 8:47 ` Michael S. Tsirkin
2020-11-02 6:36 ` Jason Wang
1 sibling, 1 reply; 13+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 8:47 UTC (permalink / raw)
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Tue, Oct 27, 2020 at 12:47:34AM -0500, Mike Christie wrote:
> On 10/25/20 10:51 PM, Jason Wang wrote:
> >
> > On 2020/10/22 8:34 AM, Mike Christie wrote:
> > > Each vhost-scsi device will need an evt and ctl queue, but the number
> > > of IO queues depends on whatever the user has configured in userspace.
> > > This patch has vhost-scsi create the evt, ctl and one IO vq at device
> > > open time. We then create the other IO vqs when userspace starts to
> > > set them up. We still waste some mem on the vq and scsi vq structs,
> > > but we don't waste mem on iovec related arrays and for later patches
> > > we know which queues are used by the dev->nvqs value.
> > >
> > > Signed-off-by: Mike Christie <michael.christie@oracle.com>
> > > ---
> > > drivers/vhost/scsi.c | 19 +++++++++++++++----
> > > 1 file changed, 15 insertions(+), 4 deletions(-)
> >
> >
> > Not familiar with SCSI. But I wonder if it could behave like vhost-net.
> >
> > E.g. userspace should know the number of virtqueues so it can just open
> > and close multiple vhost-scsi file descriptors.
> >
>
> One hiccup I'm hitting is that we might end up creating about 3x more vqs
> than we need. The problem is that for scsi each vhost device has:
>
> vq=0: special control vq
> vq=1: event vq
> vq=2 and above: SCSI CMD/IO vqs. We want to create N of these.
>
> Today we do:
>
> Userspace does open(/dev/vhost-scsi)
> vhost_dev_init(create 128 vqs and then later we setup and use N of
> them);
>
> Qemu does ioctl(VHOST_SET_OWNER)
> vhost_dev_set_owner()
>
> For N vqs userspace does:
> // virtqueue setup related ioctls
>
> Qemu does ioctl(VHOST_SCSI_SET_ENDPOINT)
> - match LIO/target port to vhost_dev
>
>
> So we could change that to:
>
> For N IO vqs userspace does
> open(/dev/vhost-scsi)
> vhost_dev_init(create IO, evt, and ctl);
>
> for N IO vqs Qemu does:
> ioctl(VHOST_SET_OWNER)
> vhost_dev_set_owner()
>
> for N IO vqs Qemu does:
> // virtqueue setup related ioctls
>
> for N IO vqs Qemu does:
> ioctl(VHOST_SCSI_SET_ENDPOINT)
> - match LIO/target port to vhost_dev and assemble the
> multiple vhost_dev device.
>
> The problem is that we have to setup some of the evt/ctl specific parts at
> open() time when vhost_dev_init does vhost_poll_init for example.
>
> - At open time, we don't know if this vhost_dev is going to be part of a
> multiple vhost_device device or a single one so we need to create at least 3
> of them
> - If it is a multiple device we don't know if it's the first device being
> created for the device or the N'th, so we don't know if the dev's vqs will
> be used for IO or ctls/evts, so we have to create all 3.
>
> When we get the first VHOST_SCSI_SET_ENDPOINT call for a new style multiple
> vhost_dev device, we can use that dev's evt/ctl vqs for events/controls
> requests. When we get the other VHOST_SCSI_SET_ENDPOINT calls for the
> multiple vhost_dev device then those dev's evt/ctl vqs will be ignored and
> we will only use their IO vqs. So we end up with a lot of extra vqs.
The issue Jason's hinting at is how can admins control the amount
of resources a given qemu instance can consume?
After all vhost vqs all live in host kernel memory ...
Limiting # of open fds would be one way to do that ...
The need to share event/control vqs between devices is a problem though,
and sending lots of ioctls on things like reset is also not that elegant.
Jason, did you have a good solution in mind?
> One other question/issue I have is that qemu can open the /dev/vhost-scsi
> device or it allows tools like libvirtd to open the device and pass in the
> fd to use. For the latter case, would we continue to have those tools pass
> in the leading fd, then have qemu do the other num_queues - 1
> open(/dev/vhost-scsi) calls? Or do these apps that pass in the fd need to
> know about all of the fds for some management reason?
They know about all the fds, for resource control and privilege
separation reasons.
--
MST
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 09/17] vhost scsi: fix cmd completion race
[not found] ` <1603326903-27052-10-git-send-email-michael.christie@oracle.com>
@ 2020-10-30 8:51 ` Michael S. Tsirkin
2020-10-30 16:04 ` Paolo Bonzini
0 siblings, 1 reply; 13+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 8:51 UTC (permalink / raw)
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Wed, Oct 21, 2020 at 07:34:55PM -0500, Mike Christie wrote:
> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
> When the last put happens a little later, we could race where
> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>
> This patch has us delay completing the cmd until the last lio core ref
> is dropped. We then know that once we signal to the guest that the cmd
> is completed, if it queues a new command it will find a free cmd.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
Paolo, could you review this one?
> ---
> drivers/vhost/scsi.c | 42 +++++++++++++++---------------------------
> 1 file changed, 15 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index f6b9010..2fa48dd 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -322,7 +322,7 @@ static u32 vhost_scsi_tpg_get_inst_index(struct se_portal_group *se_tpg)
> return 1;
> }
>
> -static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
> +static void vhost_scsi_release_cmd_res(struct se_cmd *se_cmd)
> {
> struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
> struct vhost_scsi_cmd, tvc_se_cmd);
> @@ -344,6 +344,16 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
> vhost_scsi_put_inflight(inflight);
> }
>
> +static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
> +{
> + struct vhost_scsi_cmd *cmd = container_of(se_cmd,
> + struct vhost_scsi_cmd, tvc_se_cmd);
> + struct vhost_scsi *vs = cmd->tvc_vhost;
> +
> + llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
> + vhost_work_queue(&vs->dev, &vs->vs_completion_work);
> +}
> +
> static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
> {
> return 0;
> @@ -366,28 +376,15 @@ static int vhost_scsi_get_cmd_state(struct se_cmd *se_cmd)
> return 0;
> }
>
> -static void vhost_scsi_complete_cmd(struct vhost_scsi_cmd *cmd)
> -{
> - struct vhost_scsi *vs = cmd->tvc_vhost;
> -
> - llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
> -
> - vhost_work_queue(&vs->dev, &vs->vs_completion_work);
> -}
> -
> static int vhost_scsi_queue_data_in(struct se_cmd *se_cmd)
> {
> - struct vhost_scsi_cmd *cmd = container_of(se_cmd,
> - struct vhost_scsi_cmd, tvc_se_cmd);
> - vhost_scsi_complete_cmd(cmd);
> + transport_generic_free_cmd(se_cmd, 0);
> return 0;
> }
>
> static int vhost_scsi_queue_status(struct se_cmd *se_cmd)
> {
> - struct vhost_scsi_cmd *cmd = container_of(se_cmd,
> - struct vhost_scsi_cmd, tvc_se_cmd);
> - vhost_scsi_complete_cmd(cmd);
> + transport_generic_free_cmd(se_cmd, 0);
> return 0;
> }
>
> @@ -433,15 +430,6 @@ static void vhost_scsi_free_evt(struct vhost_scsi *vs, struct vhost_scsi_evt *ev
> return evt;
> }
>
> -static void vhost_scsi_free_cmd(struct vhost_scsi_cmd *cmd)
> -{
> - struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
> -
> - /* TODO locking against target/backend threads? */
> - transport_generic_free_cmd(se_cmd, 0);
> -
> -}
> -
> static int vhost_scsi_check_stop_free(struct se_cmd *se_cmd)
> {
> return target_put_sess_cmd(se_cmd);
> @@ -560,7 +548,7 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
> } else
> pr_err("Faulted on virtio_scsi_cmd_resp\n");
>
> - vhost_scsi_free_cmd(cmd);
> + vhost_scsi_release_cmd_res(se_cmd);
> }
>
> vq = -1;
> @@ -1096,7 +1084,7 @@ static u16 vhost_buf_to_lun(u8 *lun_buf)
> &prot_iter, exp_data_len,
> &data_iter))) {
> vq_err(vq, "Failed to map iov to sgl\n");
> - vhost_scsi_release_cmd(&cmd->tvc_se_cmd);
> + vhost_scsi_release_cmd_res(&cmd->tvc_se_cmd);
> goto err;
> }
> }
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 09/17] vhost scsi: fix cmd completion race
2020-10-30 8:51 ` [PATCH 09/17] vhost scsi: fix cmd completion race Michael S. Tsirkin
@ 2020-10-30 16:04 ` Paolo Bonzini
0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-10-30 16:04 UTC (permalink / raw)
To: Michael S. Tsirkin, Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha
On 30/10/20 09:51, Michael S. Tsirkin wrote:
> On Wed, Oct 21, 2020 at 07:34:55PM -0500, Mike Christie wrote:
>> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
>> When the last put happens a little later, we could race where
>> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
>> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>>
>> This patch has us delay completing the cmd until the last lio core ref
>> is dropped. We then know that once we signal to the guest that the cmd
>> is completed, if it queues a new command it will find a free cmd.
>>
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>
> Paolo, could you review this one?
I don't know how LIO does all the callbacks, honestly (I have only ever
worked on the virtio-scsi driver, not vhost-scsi, and I have only ever
reviewed some virtio-scsi spec bits of vhost-scsi).
The vhost_scsi_complete_cmd_work parts look fine, but I have no idea why
vhost_scsi_queue_data_in and vhost_scsi_queue_status now call
transport_generic_free_cmd directly.
Paolo
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures
[not found] ` <1603326903-27052-5-git-send-email-michael.christie@oracle.com>
2020-10-22 5:22 ` [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures kernel test robot
@ 2020-11-02 5:57 ` Jason Wang
2020-11-03 10:04 ` Dan Carpenter
2 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2020-11-02 5:57 UTC (permalink / raw)
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/10/22 8:34 AM, Mike Christie wrote:
> This is just a prep patch to get vhost_dev_init callers ready to handle
> the next patch where the function can fail. In this patch vhost_dev_init
> just returns 0, but I think it's easier to check for goto/error handling
> mistakes when this is separated from the next patch.
>
> Signed-off-by: Mike Christie<michael.christie@oracle.com>
> ---
> drivers/vhost/net.c | 11 +++++++----
> drivers/vhost/scsi.c | 7 +++++--
> drivers/vhost/test.c | 9 +++++++--
> drivers/vhost/vdpa.c | 7 +++++--
> drivers/vhost/vhost.c | 14 ++++++++------
> drivers/vhost/vhost.h | 10 +++++-----
> drivers/vhost/vsock.c | 9 ++++++---
> 7 files changed, 43 insertions(+), 24 deletions(-)
Acked-by: Jason Wang <jasowang@redhat.com>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
2020-10-30 8:47 ` Michael S. Tsirkin
@ 2020-11-02 6:36 ` Jason Wang
2020-11-02 6:49 ` Jason Wang
0 siblings, 1 reply; 13+ messages in thread
From: Jason Wang @ 2020-11-02 6:36 UTC (permalink / raw)
To: Michael S. Tsirkin, Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On 2020/10/30 4:47 PM, Michael S. Tsirkin wrote:
> On Tue, Oct 27, 2020 at 12:47:34AM -0500, Mike Christie wrote:
>> On 10/25/20 10:51 PM, Jason Wang wrote:
>>> On 2020/10/22 8:34 AM, Mike Christie wrote:
>>>> Each vhost-scsi device will need an evt and ctl queue, but the number
>>>> of IO queues depends on whatever the user has configured in userspace.
>>>> This patch has vhost-scsi create the evt, ctl and one IO vq at device
>>>> open time. We then create the other IO vqs when userspace starts to
>>>> set them up. We still waste some mem on the vq and scsi vq structs,
>>>> but we don't waste mem on iovec related arrays and for later patches
>>>> we know which queues are used by the dev->nvqs value.
>>>>
>>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>>> ---
>>>> drivers/vhost/scsi.c | 19 +++++++++++++++----
>>>> 1 file changed, 15 insertions(+), 4 deletions(-)
>>>
>>> Not familiar with SCSI. But I wonder if it could behave like vhost-net.
>>>
>>> E.g. userspace should know the number of virtqueues so it can just open
>>> and close multiple vhost-scsi file descriptors.
>>>
>> One hiccup I'm hitting is that we might end up creating about 3x more vqs
>> than we need. The problem is that for scsi each vhost device has:
>>
>> vq=0: special control vq
>> vq=1: event vq
>> vq=2 and above: SCSI CMD/IO vqs. We want to create N of these.
>>
>> Today we do:
>>
>> Userspace does open(/dev/vhost-scsi)
>> vhost_dev_init(create 128 vqs and then later we setup and use N of
>> them);
>>
>> Qemu does ioctl(VHOST_SET_OWNER)
>> vhost_dev_set_owner()
>>
>> For N vqs userspace does:
>> // virtqueue setup related ioctls
>>
>> Qemu does ioctl(VHOST_SCSI_SET_ENDPOINT)
>> - match LIO/target port to vhost_dev
>>
>>
>> So we could change that to:
>>
>> For N IO vqs userspace does
>> open(/dev/vhost-scsi)
>> vhost_dev_init(create IO, evt, and ctl);
>>
>> for N IO vqs Qemu does:
>> ioctl(VHOST_SET_OWNER)
>> vhost_dev_set_owner()
>>
>> for N IO vqs Qemu does:
>> // virtqueue setup related ioctls
>>
>> for N IO vqs Qemu does:
>> ioctl(VHOST_SCSI_SET_ENDPOINT)
>> - match LIO/target port to vhost_dev and assemble the
>> multiple vhost_dev device.
>>
>> The problem is that we have to setup some of the evt/ctl specific parts at
>> open() time when vhost_dev_init does vhost_poll_init for example.
>>
>> - At open time, we don't know if this vhost_dev is going to be part of a
>> multiple vhost_device device or a single one so we need to create at least 3
>> of them
>> - If it is a multiple device we don't know if it's the first device being
>> created for the device or the N'th, so we don't know if the dev's vqs will
>> be used for IO or ctls/evts, so we have to create all 3.
>>
>> When we get the first VHOST_SCSI_SET_ENDPOINT call for a new style multiple
>> vhost_dev device, we can use that dev's evt/ctl vqs for events/controls
>> requests. When we get the other VHOST_SCSI_SET_ENDPOINT calls for the
>> multiple vhost_dev device then those dev's evt/ctl vqs will be ignored and
>> we will only use their IO vqs. So we end up with a lot of extra vqs.
> The issue Jason's hinting at is how can admins control the amount
> of resources a given qemu instance can consume?
> After all vhost vqs all live in host kernel memory ...
> Limiting # of open fds would be one way to do that ...
>
> The need to share event/control vqs between devices is a problem though,
> and sending lots of ioctls on things like reset is also not that elegant.
> Jason, did you have a good solution in mind?
Nope, I'm not familiar with SCSI so I don't even know whether sharing
evt/cvq is possible. Considering VHOST_SCSI_MAX_VQ is already 128 per
device, Mike's proposal seems to be better.
Thanks
>
>> One other question/issue I have is that qemu can open the /dev/vhost-scsi
>> device or it allows tools like libvirtd to open the device and pass in the
>> fd to use. For the latter case, would we continue to have those tools pass
>> in the leading fd, then have qemu do the other num_queues - 1
>> open(/dev/vhost-scsi) calls? Or do these apps that pass in the fd need to
>> know about all of the fds for some management reason?
> They know about all the fds, for resource control and priveledge
> separation reasons.
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 07/17] vhost scsi: support delayed IO vq creation
2020-11-02 6:36 ` Jason Wang
@ 2020-11-02 6:49 ` Jason Wang
0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2020-11-02 6:49 UTC (permalink / raw)
To: Michael S. Tsirkin, Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On 2020/11/2 2:36 PM, Jason Wang wrote:
>>
>> The need to share event/control vqs between devices is a problem though,
>> and sending lots of ioctls on things like reset is also not that
>> elegant.
>> Jason, did you have a good solution in mind?
>
>
> Nope, I'm not familiar with SCSI so I don't even know whether sharing
> evt/cvq is possible. Considering VHOST_SCSI_MAX_VQ is already 128 per
> device, Mike's proposal seems to be better.
>
> Thanks
Btw, it looks to me like vhost_scsi_do_evt_work() makes an assumption about
the iovec layout which needs to be fixed.
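If I read the code right, the event is written with a single __copy_to_user()
into vq->iov[out].iov_base, i.e. it assumes the guest posted the whole
virtio_scsi_event as one contiguous iovec. Something along the lines of the
response path in vhost_scsi_complete_cmd_work() would avoid that assumption
(sketch only):

	struct iov_iter iov_iter;

	/* copy the event without assuming a single contiguous iovec */
	iov_iter_init(&iov_iter, READ, &vq->iov[out], in, sizeof(*event));
	if (copy_to_iter(event, sizeof(*event), &iov_iter) != sizeof(*event))
		vq_err(vq, "Faulted on copying virtio_scsi_event\n");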
Thanks
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures
[not found] ` <1603326903-27052-5-git-send-email-michael.christie@oracle.com>
2020-10-22 5:22 ` [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures kernel test robot
2020-11-02 5:57 ` Jason Wang
@ 2020-11-03 10:04 ` Dan Carpenter
2 siblings, 0 replies; 13+ messages in thread
From: Dan Carpenter @ 2020-11-03 10:04 UTC (permalink / raw)
To: kbuild, Mike Christie, martin.petersen, linux-scsi, target-devel,
mst, jasowang, pbonzini, stefanha, virtualization
Cc: kbuild-all, lkp, Dan Carpenter
Hi Mike,
url: https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-fix-scsi-cmd-handling-and-cgroup-support/20201022-083844
base: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: i386-randconfig-m021-20201101 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings:
drivers/vhost/vsock.c:648 vhost_vsock_dev_open() error: uninitialized symbol 'ret'.
vim +/ret +648 drivers/vhost/vsock.c
433fc58e6bf2c8b Asias He 2016-07-28 605 static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
433fc58e6bf2c8b Asias He 2016-07-28 606 {
433fc58e6bf2c8b Asias He 2016-07-28 607 struct vhost_virtqueue **vqs;
433fc58e6bf2c8b Asias He 2016-07-28 608 struct vhost_vsock *vsock;
433fc58e6bf2c8b Asias He 2016-07-28 609 int ret;
433fc58e6bf2c8b Asias He 2016-07-28 610
433fc58e6bf2c8b Asias He 2016-07-28 611 /* This struct is large and allocation could fail, fall back to vmalloc
433fc58e6bf2c8b Asias He 2016-07-28 612 * if there is no other way.
433fc58e6bf2c8b Asias He 2016-07-28 613 */
dcda9b04713c3f6 Michal Hocko 2017-07-12 614 vsock = kvmalloc(sizeof(*vsock), GFP_KERNEL | __GFP_RETRY_MAYFAIL);
433fc58e6bf2c8b Asias He 2016-07-28 615 if (!vsock)
433fc58e6bf2c8b Asias He 2016-07-28 616 return -ENOMEM;
433fc58e6bf2c8b Asias He 2016-07-28 617
433fc58e6bf2c8b Asias He 2016-07-28 618 vqs = kmalloc_array(ARRAY_SIZE(vsock->vqs), sizeof(*vqs), GFP_KERNEL);
433fc58e6bf2c8b Asias He 2016-07-28 619 if (!vqs) {
433fc58e6bf2c8b Asias He 2016-07-28 620 ret = -ENOMEM;
433fc58e6bf2c8b Asias He 2016-07-28 621 goto out;
433fc58e6bf2c8b Asias He 2016-07-28 622 }
433fc58e6bf2c8b Asias He 2016-07-28 623
a72b69dc083a931 Stefan Hajnoczi 2017-11-09 624 vsock->guest_cid = 0; /* no CID assigned yet */
a72b69dc083a931 Stefan Hajnoczi 2017-11-09 625
433fc58e6bf2c8b Asias He 2016-07-28 626 atomic_set(&vsock->queued_replies, 0);
433fc58e6bf2c8b Asias He 2016-07-28 627
433fc58e6bf2c8b Asias He 2016-07-28 628 vqs[VSOCK_VQ_TX] = &vsock->vqs[VSOCK_VQ_TX];
433fc58e6bf2c8b Asias He 2016-07-28 629 vqs[VSOCK_VQ_RX] = &vsock->vqs[VSOCK_VQ_RX];
433fc58e6bf2c8b Asias He 2016-07-28 630 vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
433fc58e6bf2c8b Asias He 2016-07-28 631 vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick;
433fc58e6bf2c8b Asias He 2016-07-28 632
6e1629548d318c2 Mike Christie 2020-10-21 633 if (vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
e82b9b0727ff6d6 Jason Wang 2019-05-17 634 UIO_MAXIOV, VHOST_VSOCK_PKT_WEIGHT,
6e1629548d318c2 Mike Christie 2020-10-21 635 VHOST_VSOCK_WEIGHT, true, NULL))
6e1629548d318c2 Mike Christie 2020-10-21 636 goto err_dev_init;
^^^^^^^^^^^^^^^^^
"ret" needs to be set here.
433fc58e6bf2c8b Asias He 2016-07-28 637
433fc58e6bf2c8b Asias He 2016-07-28 638 file->private_data = vsock;
433fc58e6bf2c8b Asias He 2016-07-28 639 spin_lock_init(&vsock->send_pkt_list_lock);
433fc58e6bf2c8b Asias He 2016-07-28 640 INIT_LIST_HEAD(&vsock->send_pkt_list);
433fc58e6bf2c8b Asias He 2016-07-28 641 vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
433fc58e6bf2c8b Asias He 2016-07-28 642 return 0;
433fc58e6bf2c8b Asias He 2016-07-28 643
6e1629548d318c2 Mike Christie 2020-10-21 644 err_dev_init:
6e1629548d318c2 Mike Christie 2020-10-21 645 kfree(vqs);
433fc58e6bf2c8b Asias He 2016-07-28 646 out:
433fc58e6bf2c8b Asias He 2016-07-28 647 vhost_vsock_free(vsock);
433fc58e6bf2c8b Asias He 2016-07-28 @648 return ret;
433fc58e6bf2c8b Asias He 2016-07-28 649 }
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads: [~2020-11-03 10:04 UTC | newest]
Thread overview: 13+ messages
-- links below jump to the message on this page --
[not found] <1603326903-27052-1-git-send-email-michael.christie@oracle.com>
[not found] ` <1603326903-27052-5-git-send-email-michael.christie@oracle.com>
2020-10-22 5:22 ` [PATCH 04/17] vhost: prep vhost_dev_init users to handle failures kernel test robot
2020-11-02 5:57 ` Jason Wang
2020-11-03 10:04 ` Dan Carpenter
[not found] ` <1603326903-27052-2-git-send-email-michael.christie@oracle.com>
2020-10-26 3:33 ` [PATCH 01/17] vhost scsi: add lun parser helper Jason Wang
[not found] ` <1603326903-27052-4-git-send-email-michael.christie@oracle.com>
2020-10-26 3:34 ` [PATCH 03/17] vhost net: use goto error handling in open Jason Wang
[not found] ` <1603326903-27052-8-git-send-email-michael.christie@oracle.com>
2020-10-26 3:51 ` [PATCH 07/17] vhost scsi: support delayed IO vq creation Jason Wang
[not found] ` <49c2fc29-348c-06db-4823-392f7476d318@oracle.com>
2020-10-28 1:55 ` Jason Wang
2020-10-30 8:47 ` Michael S. Tsirkin
2020-11-02 6:36 ` Jason Wang
2020-11-02 6:49 ` Jason Wang
2020-10-29 21:47 ` [PATCH 00/17 V3] vhost: fix scsi cmd handling and cgroup support Michael S. Tsirkin
[not found] ` <1603326903-27052-10-git-send-email-michael.christie@oracle.com>
2020-10-30 8:51 ` [PATCH 09/17] vhost scsi: fix cmd completion race Michael S. Tsirkin
2020-10-30 16:04 ` Paolo Bonzini