* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
From: kernel test robot @ 2020-09-22 1:32 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, pbonzini, stefanha, virtualization
Cc: clang-built-linux, kbuild-all
Hi Mike,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on vhost/linux-next]
[also build test WARNING on v5.9-rc6 next-20200921]
[cannot apply to target/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
base: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: s390-randconfig-r022-20200920 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 4e8c028158b56d9c2142a62464e8e0686bde3584)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install s390 cross compiling tool for clang build
# apt-get install binutils-s390x-linux-gnu
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=s390
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
^
include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
___constant_swab32(x) : \
^
include/uapi/linux/swab.h:20:12: note: expanded from macro '___constant_swab32'
(((__u32)(x) & (__u32)0x0000ff00UL) << 8) | \
^
In file included from drivers/vhost/scsi.c:45:
In file included from include/uapi/linux/vhost.h:14:
In file included from include/uapi/linux/vhost_types.h:16:
In file included from include/linux/virtio_config.h:7:
In file included from include/linux/virtio.h:7:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:72:
include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
^
include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
___constant_swab32(x) : \
^
include/uapi/linux/swab.h:21:12: note: expanded from macro '___constant_swab32'
(((__u32)(x) & (__u32)0x00ff0000UL) >> 8) | \
^
In file included from drivers/vhost/scsi.c:45:
In file included from include/uapi/linux/vhost.h:14:
In file included from include/uapi/linux/vhost_types.h:16:
In file included from include/linux/virtio_config.h:7:
In file included from include/linux/virtio.h:7:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:72:
include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
^
include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
___constant_swab32(x) : \
^
include/uapi/linux/swab.h:22:12: note: expanded from macro '___constant_swab32'
(((__u32)(x) & (__u32)0xff000000UL) >> 24)))
^
In file included from drivers/vhost/scsi.c:45:
In file included from include/uapi/linux/vhost.h:14:
In file included from include/uapi/linux/vhost_types.h:16:
In file included from include/linux/virtio_config.h:7:
In file included from include/linux/virtio.h:7:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:72:
include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
~~~~~~~~~~ ^
include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
#define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
^
include/uapi/linux/swab.h:120:12: note: expanded from macro '__swab32'
__fswab32(x))
^
In file included from drivers/vhost/scsi.c:45:
In file included from include/uapi/linux/vhost.h:14:
In file included from include/uapi/linux/vhost_types.h:16:
In file included from include/linux/virtio_config.h:7:
In file included from include/linux/virtio.h:7:
In file included from include/linux/scatterlist.h:9:
In file included from arch/s390/include/asm/io.h:72:
include/asm-generic/io.h:501:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writeb(value, PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:511:46: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writew(cpu_to_le16(value), PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:521:46: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
__raw_writel(cpu_to_le32(value), PCI_IOBASE + addr);
~~~~~~~~~~ ^
include/asm-generic/io.h:609:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
readsb(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
include/asm-generic/io.h:617:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
readsw(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
include/asm-generic/io.h:625:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
readsl(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
include/asm-generic/io.h:634:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
writesb(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
include/asm-generic/io.h:643:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
writesw(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
include/asm-generic/io.h:652:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
writesl(PCI_IOBASE + addr, buffer, count);
~~~~~~~~~~ ^
>> drivers/vhost/scsi.c:606:28: warning: variable 'cpu' is uninitialized when used here [-Wuninitialized]
cmd->tvc_se_cmd.map_cpu = cpu;
^~~
drivers/vhost/scsi.c:583:14: note: initialize the variable 'cpu' to silence this warning
int tag, cpu;
^
= 0
21 warnings generated.
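The flagged warning is real: this series drops the sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu) call that previously filled in 'cpu', so nothing assigns it before the map_cpu store. A minimal sketch of one possible fix, not the author's actual follow-up: since the tag is no longer managed by the lio tag pool, the variable and the assignment can simply go away.

	-	int tag, cpu;
	+	int tag;
	 ...
	 	cmd->tvc_se_cmd.map_tag = tag;
	-	cmd->tvc_se_cmd.map_cpu = cpu;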
# https://github.com/0day-ci/linux/commit/aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
git checkout aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
vim +/cpu +606 drivers/vhost/scsi.c
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 571
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 572 static struct vhost_scsi_cmd *
aef0e1e9298ab6 drivers/vhost/scsi.c Mike Christie 2020-09-21 573 vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 574 unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 575 u32 exp_data_len, int data_direction)
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 576 {
aef0e1e9298ab6 drivers/vhost/scsi.c Mike Christie 2020-09-21 577 struct vhost_scsi_virtqueue *svq = container_of(vq,
aef0e1e9298ab6 drivers/vhost/scsi.c Mike Christie 2020-09-21 578 struct vhost_scsi_virtqueue, vq);
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 579 struct vhost_scsi_cmd *cmd;
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 580 struct vhost_scsi_nexus *tv_nexus;
b1935f687bb93b drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 581 struct scatterlist *sg, *prot_sg;
3aee26b4ae9104 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 582 struct page **pages;
10e9cbb6b53111 drivers/vhost/scsi.c Matthew Wilcox 2018-06-12 583 int tag, cpu;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 584
9871831283e795 drivers/vhost/scsi.c Asias He 2013-05-06 585 tv_nexus = tpg->tpg_nexus;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 586 if (!tv_nexus) {
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 587 pr_err("Unable to locate active struct vhost_scsi_nexus\n");
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 588 return ERR_PTR(-EIO);
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 589 }
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 590
aef0e1e9298ab6 drivers/vhost/scsi.c Mike Christie 2020-09-21 591 tag = sbitmap_get(&svq->scsi_tags, 0, false);
4a47d3a1ff10e5 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 592 if (tag < 0) {
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 593 pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
4a47d3a1ff10e5 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 594 return ERR_PTR(-ENOMEM);
4a47d3a1ff10e5 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 595 }
4a47d3a1ff10e5 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 596
aef0e1e9298ab6 drivers/vhost/scsi.c Mike Christie 2020-09-21 597 cmd = &svq->scsi_cmds[tag];
3aee26b4ae9104 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 598 sg = cmd->tvc_sgl;
b1935f687bb93b drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 599 prot_sg = cmd->tvc_prot_sgl;
3aee26b4ae9104 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 600 pages = cmd->tvc_upages;
473f0b15a4c97d drivers/vhost/scsi.c Markus Elfring 2017-05-20 601 memset(cmd, 0, sizeof(*cmd));
3aee26b4ae9104 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 602 cmd->tvc_sgl = sg;
b1935f687bb93b drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 603 cmd->tvc_prot_sgl = prot_sg;
3aee26b4ae9104 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 604 cmd->tvc_upages = pages;
4824d3bfb9097a drivers/vhost/scsi.c Nicholas Bellinger 2013-06-07 605 cmd->tvc_se_cmd.map_tag = tag;
10e9cbb6b53111 drivers/vhost/scsi.c Matthew Wilcox 2018-06-12 @606 cmd->tvc_se_cmd.map_cpu = cpu;
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 607 cmd->tvc_tag = scsi_tag;
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 608 cmd->tvc_lun = lun;
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 609 cmd->tvc_task_attr = task_attr;
3c63f66a0dcdd6 drivers/vhost/scsi.c Asias He 2013-05-06 610 cmd->tvc_exp_data_len = exp_data_len;
3c63f66a0dcdd6 drivers/vhost/scsi.c Asias He 2013-05-06 611 cmd->tvc_data_direction = data_direction;
3c63f66a0dcdd6 drivers/vhost/scsi.c Asias He 2013-05-06 612 cmd->tvc_nexus = tv_nexus;
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 613 cmd->inflight = vhost_scsi_get_inflight(vq);
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 614
1a1ff8256af679 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 615 memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE);
95e7c4341b8e28 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 616
3c63f66a0dcdd6 drivers/vhost/scsi.c Asias He 2013-05-06 617 return cmd;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 618 }
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 619
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling
From: Jason Wang @ 2020-09-22 1:58 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/9/22 2:23 AM, Mike Christie wrote:
> We must free the vqs array in the open failure path, because
> vhost_vdpa_release will not be called.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/vdpa.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 3fab94f..3301214 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -808,6 +808,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
>
> err_init_iotlb:
> vhost_dev_cleanup(&v->vdev);
> + kfree(vqs);
> err:
> atomic_dec(&v->opened);
> return r;
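For context, a sketch of the open path the fix completes (the shape is assumed from the hunk; names other than the labels shown above are illustrative):

	static int vhost_vdpa_open(struct inode *inode, struct file *filep)
	{
		...
		vqs = kmalloc_array(nvqs, sizeof(*vqs), GFP_KERNEL);
		if (!vqs) {
			r = -ENOMEM;
			goto err;
		}

		vhost_dev_init(&v->vdev, vqs, nvqs, ...);	/* vdev now references vqs */
		r = init_iotlb_step(v);		/* hypothetical name for the failing step */
		if (r)
			goto err_init_iotlb;
		...
	err_init_iotlb:
		vhost_dev_cleanup(&v->vdev);
		kfree(vqs);	/* the fix: without it vqs leaks, since release() never runs after a failed open */
	err:
		atomic_dec(&v->opened);
		return r;
	}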
Acked-by: Jason Wang <jasowang@redhat.com>
* Re: [PATCH 7/8] vhost: remove work arg from vhost_work_flush
From: Jason Wang @ 2020-09-22 2:01 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/9/22 2:23 AM, Mike Christie wrote:
> vhost_work_flush doesn't do anything with the work arg. This patch drops
> it and then renames vhost_work_flush to vhost_work_dev_flush to reflect
> that the function flushes all the work items in the dev and not just a
> specific queue or work item.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vhost/scsi.c | 4 ++--
> drivers/vhost/vhost.c | 8 ++++----
> drivers/vhost/vhost.h | 2 +-
> drivers/vhost/vsock.c | 2 +-
> 4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 8791db8..5833059 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1469,8 +1469,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> /* Flush both the vhost poll and vhost work */
> for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> vhost_scsi_flush_vq(vs, i);
> - vhost_work_flush(&vs->dev, &vs->vs_completion_work);
> - vhost_work_flush(&vs->dev, &vs->vs_event_work);
> + vhost_work_dev_flush(&vs->dev);
> + vhost_work_dev_flush(&vs->dev);
>
> /* Wait for all reqs issued before the flush to be finished */
> for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 5dd9eb1..f83674e 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
> }
> EXPORT_SYMBOL_GPL(vhost_poll_stop);
>
> -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> +void vhost_work_dev_flush(struct vhost_dev *dev)
> {
> struct vhost_flush_struct flush;
>
> @@ -243,13 +243,13 @@ void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> wait_for_completion(&flush.wait_event);
> }
> }
> -EXPORT_SYMBOL_GPL(vhost_work_flush);
> +EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
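For reference, a sketch of the renamed function's full body, reconstructed from the context lines above: the single vhost worker runs queued works in FIFO order, so queueing one sentinel work and waiting for it flushes everything queued earlier, which is exactly why no specific 'work' argument is needed.

	void vhost_work_dev_flush(struct vhost_dev *dev)
	{
		struct vhost_flush_struct flush;

		if (dev->worker) {
			init_completion(&flush.wait_event);
			vhost_work_init(&flush.work, vhost_flush_work);

			/* the sentinel completes only after all earlier works ran */
			vhost_work_queue(dev, &flush.work);
			wait_for_completion(&flush.wait_event);
		}
	}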
>
> /* Flush any work that has been scheduled. When calling this, don't hold any
> * locks that are also used by the callback. */
> void vhost_poll_flush(struct vhost_poll *poll)
> {
> - vhost_work_flush(poll->dev, &poll->work);
> + vhost_work_dev_flush(poll->dev);
> }
> EXPORT_SYMBOL_GPL(vhost_poll_flush);
>
> @@ -542,7 +542,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
> attach.owner = current;
> vhost_work_init(&attach.work, vhost_attach_cgroups_work);
> vhost_work_queue(dev, &attach.work);
> - vhost_work_flush(dev, &attach.work);
> + vhost_work_dev_flush(dev);
> return attach.ret;
> }
>
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 3d30b3d..b91efb5 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -46,7 +46,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
> void vhost_poll_stop(struct vhost_poll *poll);
> void vhost_poll_flush(struct vhost_poll *poll);
> void vhost_poll_queue(struct vhost_poll *poll);
> -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> +void vhost_work_dev_flush(struct vhost_dev *dev);
> long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
>
> struct vhost_log {
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index a483cec..f40205f 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -652,7 +652,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
> for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
> if (vsock->vqs[i].handle_kick)
> vhost_poll_flush(&vsock->vqs[i].poll);
> - vhost_work_flush(&vsock->dev, &vsock->send_pkt_work);
> + vhost_work_dev_flush(&vsock->dev);
> }
>
> static void vhost_vsock_reset_orphans(struct sock *sk)
* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
From: Jason Wang @ 2020-09-22 2:02 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/9/22 2:23 AM, Mike Christie wrote:
> This adds a helper to check if a vq has been set up. The next patches
> will use this when we move the vhost scsi cmd preallocation from per
> session to per vq. In the per vq case, we only want to allocate cmds
> for vqs that have actually been setup and not for all the possible
> vqs.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/vhost.c | 9 +++++++++
> drivers/vhost/vhost.h | 1 +
> 2 files changed, 10 insertions(+)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index b45519c..5dd9eb1 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
> spin_lock_init(&call_ctx->ctx_lock);
> }
>
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
> +{
> + if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
> + return true;
> + else
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(vhost_vq_is_setup);
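As a usage illustration, patch 3/8 later in this thread calls the helper like this when allocating per-vq command pools, skipping queues userspace never configured:

	for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
		vq = &vs->vqs[i].vq;
		if (!vhost_vq_is_setup(vq))
			continue;

		if (vhost_scsi_setup_vq_cmds(vq, vq->num))
			goto destroy_vq_cmds;
	}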
This is probably ok, but I wonder whether we should have something like
what vDPA did (VHOST_SET_VRING_ENABLE) to match the virtio 1.0 device
definition.
Thanks
> +
> static void vhost_vq_reset(struct vhost_dev *dev,
> struct vhost_virtqueue *vq)
> {
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 9032d3c..3d30b3d 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -190,6 +190,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *,
> struct vhost_log *log, unsigned int *log_num);
> void vhost_discard_vq_desc(struct vhost_virtqueue *, int n);
>
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
> int vhost_vq_init_access(struct vhost_virtqueue *);
> int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
> int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,
* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
From: Bart Van Assche @ 2020-09-22 2:45 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, pbonzini, stefanha, virtualization
On 2020-09-21 11:23, Mike Christie wrote:
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
> +{
> + if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
> + return true;
> + else
> + return false;
> +}
Has it been considered to change the body of this function into
"return vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq)"? I'm
concerned that otherwise one or another build bot will suggest making that
change.
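Spelled out, the suggested body would read:

	bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
	{
		return vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq);
	}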
Thanks,
Bart.
* Re: [PATCH 4/8] vhost scsi: fix cmd completion race
From: Bart Van Assche @ 2020-09-22 2:48 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, pbonzini, stefanha, virtualization
On 2020-09-21 11:23, Mike Christie wrote:
> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
> If the last put happens a little later, then we could hit a race where
> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>
> This patch has us delay completing the cmd until the last lio core ref
> is dropped. We then know that, once we signal to the guest that the cmd
> is completed, any new command it queues will find a free cmd.
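Spelled out, the interleaving the description refers to (illustrative timeline):

	vhost work thread                       guest
	-----------------                       -----
	vhost_scsi_complete_cmd_work()
	    vhost_signal()
	                                        sees the completion and
	                                        queues a new command
	vhost_scsi_handle_vq()
	    no free cmd: the final se_cmd put
	    for the old cmd has not released
	    its slot yet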
It seems weird to me to see a reference to LIO in the description of a
vhost patch? Since this driver supports more backends than LIO, shouldn't
the patch description be made more generic?
Thanks,
Bart.
* Re: [PATCH 4/8] vhost scsi: fix cmd completion race
From: Bart Van Assche @ 2020-09-22 2:51 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, pbonzini, stefanha, virtualization
On 2020-09-21 19:48, Bart Van Assche wrote:
> On 2020-09-21 11:23, Mike Christie wrote:
>> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
>> If the last put happens a little later, then we could hit a race where
>> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
>> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>>
>> This patch has us delay completing the cmd until the last lio core ref
>> is dropped. We then know that, once we signal to the guest that the cmd
>> is completed, any new command it queues will find a free cmd.
>
> It seems weird to me to see a reference to LIO in the description of a
> vhost patch? Since this driver supports more backends than LIO, shouldn't
> the patch description be made more generic?
Please ignore the above comment.
Bart.
* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
From: Dan Carpenter @ 2020-09-23 7:53 UTC
To: kbuild, Mike Christie, martin.petersen, linux-scsi, target-devel,
mst, jasowang, pbonzini, stefanha, virtualization
Cc: kbuild-all, lkp
Hi Mike,
url: https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
base: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: i386-randconfig-m021-20200923 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
If you fix the issue, kindly add the following tags as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings:
drivers/vhost/scsi.c:606 vhost_scsi_get_cmd() error: uninitialized symbol 'cpu'.
# https://github.com/0day-ci/linux/commit/aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
git checkout aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
vim +/cpu +606 drivers/vhost/scsi.c
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 572 static struct vhost_scsi_cmd *
aef0e1e9298ab68f drivers/vhost/scsi.c Mike Christie 2020-09-21 573 vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 574 unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 575 u32 exp_data_len, int data_direction)
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 576 {
aef0e1e9298ab68f drivers/vhost/scsi.c Mike Christie 2020-09-21 577 struct vhost_scsi_virtqueue *svq = container_of(vq,
aef0e1e9298ab68f drivers/vhost/scsi.c Mike Christie 2020-09-21 578 struct vhost_scsi_virtqueue, vq);
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 579 struct vhost_scsi_cmd *cmd;
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 580 struct vhost_scsi_nexus *tv_nexus;
b1935f687bb93b20 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 581 struct scatterlist *sg, *prot_sg;
3aee26b4ae91048c drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 582 struct page **pages;
10e9cbb6b531117b drivers/vhost/scsi.c Matthew Wilcox 2018-06-12 583 int tag, cpu;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 584
9871831283e79575 drivers/vhost/scsi.c Asias He 2013-05-06 585 tv_nexus = tpg->tpg_nexus;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 586 if (!tv_nexus) {
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 587 pr_err("Unable to locate active struct vhost_scsi_nexus\n");
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 588 return ERR_PTR(-EIO);
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 589 }
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 590
aef0e1e9298ab68f drivers/vhost/scsi.c Mike Christie 2020-09-21 591 tag = sbitmap_get(&svq->scsi_tags, 0, false);
4a47d3a1ff10e564 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 592 if (tag < 0) {
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 593 pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
4a47d3a1ff10e564 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 594 return ERR_PTR(-ENOMEM);
4a47d3a1ff10e564 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 595 }
4a47d3a1ff10e564 drivers/vhost/scsi.c Nicholas Bellinger 2013-09-23 596
aef0e1e9298ab68f drivers/vhost/scsi.c Mike Christie 2020-09-21 597 cmd = &svq->scsi_cmds[tag];
3aee26b4ae91048c drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 598 sg = cmd->tvc_sgl;
b1935f687bb93b20 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 599 prot_sg = cmd->tvc_prot_sgl;
3aee26b4ae91048c drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 600 pages = cmd->tvc_upages;
473f0b15a4c97d39 drivers/vhost/scsi.c Markus Elfring 2017-05-20 601 memset(cmd, 0, sizeof(*cmd));
3aee26b4ae91048c drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 602 cmd->tvc_sgl = sg;
b1935f687bb93b20 drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 603 cmd->tvc_prot_sgl = prot_sg;
3aee26b4ae91048c drivers/vhost/scsi.c Nicholas Bellinger 2013-06-21 604 cmd->tvc_upages = pages;
4824d3bfb9097ac5 drivers/vhost/scsi.c Nicholas Bellinger 2013-06-07 605 cmd->tvc_se_cmd.map_tag = tag;
10e9cbb6b531117b drivers/vhost/scsi.c Matthew Wilcox 2018-06-12 @606 cmd->tvc_se_cmd.map_cpu = cpu;
"cpu" is never initialized.
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 607 cmd->tvc_tag = scsi_tag;
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 608 cmd->tvc_lun = lun;
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 609 cmd->tvc_task_attr = task_attr;
3c63f66a0dcdd6cb drivers/vhost/scsi.c Asias He 2013-05-06 610 cmd->tvc_exp_data_len = exp_data_len;
3c63f66a0dcdd6cb drivers/vhost/scsi.c Asias He 2013-05-06 611 cmd->tvc_data_direction = data_direction;
3c63f66a0dcdd6cb drivers/vhost/scsi.c Asias He 2013-05-06 612 cmd->tvc_nexus = tv_nexus;
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 613 cmd->inflight = vhost_scsi_get_inflight(vq);
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 614
1a1ff8256af679c8 drivers/vhost/scsi.c Nicholas Bellinger 2015-01-31 615 memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE);
95e7c4341b8e28da drivers/vhost/scsi.c Nicholas Bellinger 2014-02-22 616
3c63f66a0dcdd6cb drivers/vhost/scsi.c Asias He 2013-05-06 617 return cmd;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18 618 }
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH 5/8] vhost scsi: add lun parser helper
From: Paolo Bonzini @ 2020-09-23 10:23 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
jasowang, stefanha, virtualization
On 21/09/20 20:23, Mike Christie wrote:
> Move the code that parses the lun from the req's lun_buf into a helper,
> so the tmf code can use it in the next patch.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 26d0f75..736ce19 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -899,6 +899,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> return ret;
> }
>
> +static u16 vhost_buf_to_lun(u8 *lun_buf)
> +{
> + return ((lun_buf[2] << 8) | lun_buf[3]) & 0x3FFF;
> +}
> +
> static void
> vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
> {
> @@ -1037,12 +1042,12 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> tag = vhost64_to_cpu(vq, v_req_pi.tag);
> task_attr = v_req_pi.task_attr;
> cdb = &v_req_pi.cdb[0];
> - lun = ((v_req_pi.lun[2] << 8) | v_req_pi.lun[3]) & 0x3FFF;
> + lun = vhost_buf_to_lun(v_req_pi.lun);
> } else {
> tag = vhost64_to_cpu(vq, v_req.tag);
> task_attr = v_req.task_attr;
> cdb = &v_req.cdb[0];
> - lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
> + lun = vhost_buf_to_lun(v_req.lun);
> }
> /*
> * Check that the received CDB size does not exceeded our
>
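As a worked example of the decode, assuming the usual virtio-scsi single-level LUN encoding (byte 0 = 1, byte 1 = target, bytes 2-3 = LUN with bit 14 set; the values below are illustrative):

	lun_buf = { 1, 0, 0x40, 0x05 }
	((0x40 << 8) | 0x05) & 0x3FFF  =  0x4005 & 0x3FFF  =  5   ->  LUN 5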
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
From: Michael S. Tsirkin @ 2020-09-24 6:22 UTC
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Mon, Sep 21, 2020 at 01:23:03PM -0500, Mike Christie wrote:
> We currently are limited to 256 cmds per session. This leads to problems
> where, if the user has increased virtqueue_size to more than 2 or
> cmd_per_lun to more than 256, vhost_scsi_get_tag can fail and the guest
> will get IO errors.
>
> This patch moves the cmd allocation to per vq so we can easily match
> whatever the user has specified for num_queues and
> virtqueue_size/cmd_per_lun. It also makes it easier to control how much
> memory we preallocate. For cases where perf is not as important and
> we can use the current defaults (1 vq and 128 cmds per vq), memory use
> from preallocated cmds is cut in half. For cases where we are willing
> to use more memory for higher perf, cmd mem use will now increase as
> the num queues and queue depth increase.
>
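Concretely, with the numbers from the description (the last line is an illustrative configuration, not taken from the patch):

	old:                 256 cmds per session (VHOST_SCSI_DEFAULT_TAGS), regardless of config
	new, defaults:       1 vq  x  128 cmds =  128 cmds  (about half the preallocation)
	new, perf-oriented:  8 vqs x 1024 cmds = 8192 cmds  (scales with num_queues and queue depth)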
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
> drivers/vhost/scsi.c | 204 ++++++++++++++++++++++++++++++++-------------------
> 1 file changed, 127 insertions(+), 77 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index b22adf0..13311b8 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -52,7 +52,6 @@
> #define VHOST_SCSI_VERSION "v0.1"
> #define VHOST_SCSI_NAMELEN 256
> #define VHOST_SCSI_MAX_CDB_SIZE 32
> -#define VHOST_SCSI_DEFAULT_TAGS 256
> #define VHOST_SCSI_PREALLOC_SGLS 2048
> #define VHOST_SCSI_PREALLOC_UPAGES 2048
> #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
> @@ -189,6 +188,9 @@ struct vhost_scsi_virtqueue {
> * Writers must also take dev mutex and flush under it.
> */
> int inflight_idx;
> + struct vhost_scsi_cmd *scsi_cmds;
> + struct sbitmap scsi_tags;
> + int max_cmds;
> };
>
> struct vhost_scsi {
> @@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
> {
> struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
> struct vhost_scsi_cmd, tvc_se_cmd);
> - struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
> + struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
> + struct vhost_scsi_virtqueue, vq);
> + struct vhost_scsi_inflight *inflight = tv_cmd->inflight;
> int i;
>
> if (tv_cmd->tvc_sgl_count) {
> @@ -336,8 +340,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
> put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
> }
>
> - vhost_scsi_put_inflight(tv_cmd->inflight);
> - target_free_tag(se_sess, se_cmd);
> + sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);
> + vhost_scsi_put_inflight(inflight);
> }
>
> static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
> @@ -566,13 +570,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
> }
>
> static struct vhost_scsi_cmd *
> -vhost_scsi_get_tag(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
> +vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
> unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
> u32 exp_data_len, int data_direction)
> {
> + struct vhost_scsi_virtqueue *svq = container_of(vq,
> + struct vhost_scsi_virtqueue, vq);
> struct vhost_scsi_cmd *cmd;
> struct vhost_scsi_nexus *tv_nexus;
> - struct se_session *se_sess;
> struct scatterlist *sg, *prot_sg;
> struct page **pages;
> int tag, cpu;
> @@ -582,15 +587,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
> pr_err("Unable to locate active struct vhost_scsi_nexus\n");
> return ERR_PTR(-EIO);
> }
> - se_sess = tv_nexus->tvn_se_sess;
>
> - tag = sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu);
> + tag = sbitmap_get(&svq->scsi_tags, 0, false);
> if (tag < 0) {
> pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
> return ERR_PTR(-ENOMEM);
> }
After this change, cpu is uninitialized.
>
> - cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[tag];
> + cmd = &svq->scsi_cmds[tag];
> sg = cmd->tvc_sgl;
> prot_sg = cmd->tvc_prot_sgl;
> pages = cmd->tvc_upages;
> @@ -1065,11 +1069,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
> scsi_command_size(cdb), VHOST_SCSI_MAX_CDB_SIZE);
> goto err;
> }
> - cmd = vhost_scsi_get_tag(vq, tpg, cdb, tag, lun, task_attr,
> + cmd = vhost_scsi_get_cmd(vq, tpg, cdb, tag, lun, task_attr,
> exp_data_len + prot_bytes,
> data_direction);
> if (IS_ERR(cmd)) {
> - vq_err(vq, "vhost_scsi_get_tag failed %ld\n",
> + vq_err(vq, "vhost_scsi_get_cmd failed %ld\n",
> PTR_ERR(cmd));
> goto err;
> }
> @@ -1373,6 +1377,83 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> wait_for_completion(&old_inflight[i]->comp);
> }
>
> +static void vhost_scsi_destroy_vq_cmds(struct vhost_virtqueue *vq)
> +{
> + struct vhost_scsi_virtqueue *svq = container_of(vq,
> + struct vhost_scsi_virtqueue, vq);
> + struct vhost_scsi_cmd *tv_cmd;
> + unsigned int i;
> +
> + if (!svq->scsi_cmds)
> + return;
> +
> + for (i = 0; i < svq->max_cmds; i++) {
> + tv_cmd = &svq->scsi_cmds[i];
> +
> + kfree(tv_cmd->tvc_sgl);
> + kfree(tv_cmd->tvc_prot_sgl);
> + kfree(tv_cmd->tvc_upages);
> + }
> +
> + sbitmap_free(&svq->scsi_tags);
> + kfree(svq->scsi_cmds);
> + svq->scsi_cmds = NULL;
> +}
> +
> +static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
> +{
> + struct vhost_scsi_virtqueue *svq = container_of(vq,
> + struct vhost_scsi_virtqueue, vq);
> + struct vhost_scsi_cmd *tv_cmd;
> + unsigned int i;
> +
> + if (svq->scsi_cmds)
> + return 0;
> +
> + if (sbitmap_init_node(&svq->scsi_tags, max_cmds, -1, GFP_KERNEL,
> + NUMA_NO_NODE))
> + return -ENOMEM;
> + svq->max_cmds = max_cmds;
> +
> + svq->scsi_cmds = kcalloc(max_cmds, sizeof(*tv_cmd), GFP_KERNEL);
> + if (!svq->scsi_cmds) {
> + sbitmap_free(&svq->scsi_tags);
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < max_cmds; i++) {
> + tv_cmd = &svq->scsi_cmds[i];
> +
> + tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
> + sizeof(struct scatterlist),
> + GFP_KERNEL);
> + if (!tv_cmd->tvc_sgl) {
> + pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
> + goto out;
> + }
> +
> + tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
> + sizeof(struct page *),
> + GFP_KERNEL);
> + if (!tv_cmd->tvc_upages) {
> + pr_err("Unable to allocate tv_cmd->tvc_upages\n");
> + goto out;
> + }
> +
> + tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
> + sizeof(struct scatterlist),
> + GFP_KERNEL);
> + if (!tv_cmd->tvc_prot_sgl) {
> + pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
> + goto out;
> + }
> + }
> + return 0;
> +out:
> + vhost_scsi_destroy_vq_cmds(vq);
> + return -ENOMEM;
> +}
> +
> /*
> * Called from vhost_scsi_ioctl() context to walk the list of available
> * vhost_scsi_tpg with an active struct vhost_scsi_nexus
> @@ -1427,10 +1508,9 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>
> if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
> if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
> - kfree(vs_tpg);
> mutex_unlock(&tpg->tv_tpg_mutex);
> ret = -EEXIST;
> - goto out;
> + goto undepend;
> }
> /*
> * In order to ensure individual vhost-scsi configfs
> @@ -1442,9 +1522,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> ret = target_depend_item(&se_tpg->tpg_group.cg_item);
> if (ret) {
> pr_warn("target_depend_item() failed: %d\n", ret);
> - kfree(vs_tpg);
> mutex_unlock(&tpg->tv_tpg_mutex);
> - goto out;
> + goto undepend;
> }
> tpg->tv_tpg_vhost_count++;
> tpg->vhost_scsi = vs;
> @@ -1457,6 +1536,16 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> if (match) {
> memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn,
> sizeof(vs->vs_vhost_wwpn));
> +
> + for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
> + vq = &vs->vqs[i].vq;
> + if (!vhost_vq_is_setup(vq))
> + continue;
> +
> + if (vhost_scsi_setup_vq_cmds(vq, vq->num))
> + goto destroy_vq_cmds;
> + }
> +
> for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
> vq = &vs->vqs[i].vq;
> mutex_lock(&vq->mutex);
> @@ -1476,7 +1565,22 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> vhost_scsi_flush(vs);
> kfree(vs->vs_tpg);
> vs->vs_tpg = vs_tpg;
> + goto out;
>
> +destroy_vq_cmds:
> + for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
> + if (!vhost_vq_get_backend(&vs->vqs[i].vq))
> + vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
> + }
> +undepend:
> + for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
> + tpg = vs_tpg[i];
> + if (tpg) {
> + tpg->tv_tpg_vhost_count--;
> + target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
> + }
> + }
> + kfree(vs_tpg);
> out:
> mutex_unlock(&vs->dev.mutex);
> mutex_unlock(&vhost_scsi_mutex);
> @@ -1549,6 +1653,12 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> mutex_lock(&vq->mutex);
> vhost_vq_set_backend(vq, NULL);
> mutex_unlock(&vq->mutex);
> + /*
> + * Make sure cmds are not running before tearing them
> + * down.
> + */
> + vhost_scsi_flush(vs);
> + vhost_scsi_destroy_vq_cmds(vq);
> }
> }
> /*
> @@ -1842,23 +1952,6 @@ static void vhost_scsi_port_unlink(struct se_portal_group *se_tpg,
> mutex_unlock(&vhost_scsi_mutex);
> }
>
> -static void vhost_scsi_free_cmd_map_res(struct se_session *se_sess)
> -{
> - struct vhost_scsi_cmd *tv_cmd;
> - unsigned int i;
> -
> - if (!se_sess->sess_cmd_map)
> - return;
> -
> - for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
> - tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
> -
> - kfree(tv_cmd->tvc_sgl);
> - kfree(tv_cmd->tvc_prot_sgl);
> - kfree(tv_cmd->tvc_upages);
> - }
> -}
> -
> static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_store(
> struct config_item *item, const char *page, size_t count)
> {
> @@ -1898,45 +1991,6 @@ static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_show(
> NULL,
> };
>
> -static int vhost_scsi_nexus_cb(struct se_portal_group *se_tpg,
> - struct se_session *se_sess, void *p)
> -{
> - struct vhost_scsi_cmd *tv_cmd;
> - unsigned int i;
> -
> - for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
> - tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
> -
> - tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
> - sizeof(struct scatterlist),
> - GFP_KERNEL);
> - if (!tv_cmd->tvc_sgl) {
> - pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
> - goto out;
> - }
> -
> - tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
> - sizeof(struct page *),
> - GFP_KERNEL);
> - if (!tv_cmd->tvc_upages) {
> - pr_err("Unable to allocate tv_cmd->tvc_upages\n");
> - goto out;
> - }
> -
> - tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
> - sizeof(struct scatterlist),
> - GFP_KERNEL);
> - if (!tv_cmd->tvc_prot_sgl) {
> - pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
> - goto out;
> - }
> - }
> - return 0;
> -out:
> - vhost_scsi_free_cmd_map_res(se_sess);
> - return -ENOMEM;
> -}
> -
> static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
> const char *name)
> {
> @@ -1960,12 +2014,9 @@ static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
> * struct se_node_acl for the vhost_scsi struct se_portal_group with
> * the SCSI Initiator port name of the passed configfs group 'name'.
> */
> - tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg,
> - VHOST_SCSI_DEFAULT_TAGS,
> - sizeof(struct vhost_scsi_cmd),
> + tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg, 0, 0,
> TARGET_PROT_DIN_PASS | TARGET_PROT_DOUT_PASS,
> - (unsigned char *)name, tv_nexus,
> - vhost_scsi_nexus_cb);
> + (unsigned char *)name, tv_nexus, NULL);
> if (IS_ERR(tv_nexus->tvn_se_sess)) {
> mutex_unlock(&tpg->tv_tpg_mutex);
> kfree(tv_nexus);
> @@ -2015,7 +2066,6 @@ static int vhost_scsi_drop_nexus(struct vhost_scsi_tpg *tpg)
> " %s Initiator Port: %s\n", vhost_scsi_dump_proto_id(tpg->tport),
> tv_nexus->tvn_se_sess->se_node_acl->initiatorname);
>
> - vhost_scsi_free_cmd_map_res(se_sess);
> /*
> * Release the SCSI I_T Nexus to the emulated vhost Target Port
> */
> --
> 1.8.3.1
* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
From: Jason Wang @ 2020-09-24 7:22 UTC
To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
pbonzini, stefanha, virtualization
On 2020/9/24 3:12 AM, Mike Christie wrote:
> On 9/21/20 9:02 PM, Jason Wang wrote:
>> On 2020/9/22 2:23 AM, Mike Christie wrote:
>>> This adds a helper check if a vq has been setup. The next patches
>>> will use this when we move the vhost scsi cmd preallocation from per
>>> session to per vq. In the per vq case, we only want to allocate cmds
>>> for vqs that have actually been setup and not for all the possible
>>> vqs.
>>>
>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>> ---
>>> drivers/vhost/vhost.c | 9 +++++++++
>>> drivers/vhost/vhost.h | 1 +
>>> 2 files changed, 10 insertions(+)
>>>
>>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>>> index b45519c..5dd9eb1 100644
>>> --- a/drivers/vhost/vhost.c
>>> +++ b/drivers/vhost/vhost.c
>>> @@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
>>> spin_lock_init(&call_ctx->ctx_lock);
>>> }
>>> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
>>> +{
>>> + if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
>>> + return true;
>>> + else
>>> + return false;
>>> +}
>>> +EXPORT_SYMBOL_GPL(vhost_vq_is_setup);
>>
>> This is probably ok, but I wonder whether we should have something like what vDPA did (VHOST_SET_VRING_ENABLE) to match the virtio 1.0 device definition.
> It looks like I can make that work. Some questions:
>
> 1. Do you mean a generic VHOST_SET_VRING_ENABLE or a SCSI-specific one, VHOST_SCSI_SET_VRING_ENABLE?
It would be better if we can make it generic.
>
> 2. I can see the VHOST_VDPA_SET_VRING_ENABLE kernel code and the vhost_set_vring_enable qemu code, so I have an idea of how it should work for vhost scsi. However, I'm not sure of the requirements for a generic VHOST_SET_VRING_ENABLE, if that is what you meant. I could not find it in the spec either. Could you send me a pointer to the section?
In the spec, for PCI, it's the queue_enable field for a modern device.
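An abridged excerpt of the relevant structure from include/uapi/linux/virtio_pci.h: the driver selects a queue via queue_select and then writes 1 to queue_enable.

	struct virtio_pci_common_cfg {
		...
		/* About a specific virtqueue. */
		__le16 queue_select;		/* read-write */
		__le16 queue_size;		/* read-write, power of 2. */
		__le16 queue_msix_vector;	/* read-write */
		__le16 queue_enable;		/* read-write */
		...
	};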
>
> For example, for vhost-net we seem to enable a device in the VHOST_NET_SET_BACKEND ioctl, so I'm not sure what the behavior should be or what needs to be implemented for net and vsock.
Yes, but VHOST_NET_SET_BACKEND is for the whole device, not a specific
virtqueue.
Thanks
>