* Re: [PATCH v2 15/19] vhost-vdpa: support ASID based IOTLB API
From: Jason Wang @ 2022-04-01 4:24 UTC
To: Gautam Dawar
Cc: tanuj.kamde, kvm, Michael S. Tsirkin, virtualization, Wu Zongyong, Si-Wei Liu, pabloc, Eli Cohen, Zhang Min, eperezma, Martin Petrus Hubertus Habets, Xie Yongji, dinang, habetsm.xilinx, Longpeng, Dan Carpenter, Gautam Dawar, Christophe JAILLET, netdev, linux-kernel, ecree.xilinx, Harpreet Singh Anand, Martin Porter, Zhu Lingshan

On Thu, Mar 31, 2022 at 2:17 AM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>
> This patch extends vhost-vdpa to support the ASID based IOTLB API. The
> vhost-vdpa device will allocate multiple IOTLBs for a vDPA device that
> supports multiple address spaces. The IOTLBs and vDPA device memory
> mappings are determined and maintained through the ASID.
>
> Note that we still don't support vDPA devices with more than one
> address space that depend on the platform IOMMU. This work will be
> done by moving the IOMMU logic from vhost-vDPA to the vDPA device
> driver.
> > Signed-off-by: Jason Wang <jasowang@redhat.com> > Signed-off-by: Gautam Dawar <gdawar@xilinx.com> > --- > drivers/vhost/vdpa.c | 109 ++++++++++++++++++++++++++++++++++-------- > drivers/vhost/vhost.c | 2 +- > 2 files changed, 91 insertions(+), 20 deletions(-) > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c > index 6c7ee0f18892..1f1d1c425573 100644 > --- a/drivers/vhost/vdpa.c > +++ b/drivers/vhost/vdpa.c > @@ -28,7 +28,8 @@ > enum { > VHOST_VDPA_BACKEND_FEATURES = > (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) | > - (1ULL << VHOST_BACKEND_F_IOTLB_BATCH), > + (1ULL << VHOST_BACKEND_F_IOTLB_BATCH) | > + (1ULL << VHOST_BACKEND_F_IOTLB_ASID), > }; > > #define VHOST_VDPA_DEV_MAX (1U << MINORBITS) > @@ -57,12 +58,20 @@ struct vhost_vdpa { > struct eventfd_ctx *config_ctx; > int in_batch; > struct vdpa_iova_range range; > + u32 batch_asid; > }; > > static DEFINE_IDA(vhost_vdpa_ida); > > static dev_t vhost_vdpa_major; > > +static inline u32 iotlb_to_asid(struct vhost_iotlb *iotlb) > +{ > + struct vhost_vdpa_as *as = container_of(iotlb, struct > + vhost_vdpa_as, iotlb); > + return as->id; > +} > + > static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 asid) > { > struct hlist_head *head = &v->as[asid % VHOST_VDPA_IOTLB_BUCKETS]; > @@ -75,6 +84,16 @@ static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 asid) > return NULL; > } > > +static struct vhost_iotlb *asid_to_iotlb(struct vhost_vdpa *v, u32 asid) > +{ > + struct vhost_vdpa_as *as = asid_to_as(v, asid); > + > + if (!as) > + return NULL; > + > + return &as->iotlb; > +} > + > static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) > { > struct hlist_head *head = &v->as[asid % VHOST_VDPA_IOTLB_BUCKETS]; > @@ -83,6 +102,9 @@ static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) > if (asid_to_as(v, asid)) > return NULL; > > + if (asid >= v->vdpa->nas) > + return NULL; > + > as = kmalloc(sizeof(*as), GFP_KERNEL); > if (!as) > return 
NULL; > @@ -94,6 +116,17 @@ static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) > return as; > } > > +static struct vhost_vdpa_as *vhost_vdpa_find_alloc_as(struct vhost_vdpa *v, > + u32 asid) > +{ > + struct vhost_vdpa_as *as = asid_to_as(v, asid); > + > + if (as) > + return as; > + > + return vhost_vdpa_alloc_as(v, asid); > +} > + > static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid) > { > struct vhost_vdpa_as *as = asid_to_as(v, asid); > @@ -692,6 +725,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb, > struct vhost_dev *dev = &v->vdev; > struct vdpa_device *vdpa = v->vdpa; > const struct vdpa_config_ops *ops = vdpa->config; > + u32 asid = iotlb_to_asid(iotlb); > int r = 0; > > r = vhost_iotlb_add_range_ctx(iotlb, iova, iova + size - 1, > @@ -700,10 +734,10 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb, > return r; > > if (ops->dma_map) { > - r = ops->dma_map(vdpa, 0, iova, size, pa, perm, opaque); > + r = ops->dma_map(vdpa, asid, iova, size, pa, perm, opaque); > } else if (ops->set_map) { > if (!v->in_batch) > - r = ops->set_map(vdpa, 0, iotlb); > + r = ops->set_map(vdpa, asid, iotlb); > } else { > r = iommu_map(v->domain, iova, pa, size, > perm_to_iommu_flags(perm)); > @@ -725,17 +759,24 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, > { > struct vdpa_device *vdpa = v->vdpa; > const struct vdpa_config_ops *ops = vdpa->config; > + u32 asid = iotlb_to_asid(iotlb); > > vhost_vdpa_iotlb_unmap(v, iotlb, iova, iova + size - 1); > > if (ops->dma_map) { > - ops->dma_unmap(vdpa, 0, iova, size); > + ops->dma_unmap(vdpa, asid, iova, size); > } else if (ops->set_map) { > if (!v->in_batch) > - ops->set_map(vdpa, 0, iotlb); > + ops->set_map(vdpa, asid, iotlb); > } else { > iommu_unmap(v->domain, iova, size); > } > + > + /* If we are in the middle of batch processing, delay the free > + * of AS until BATCH_END. 
> + */ > + if (!v->in_batch && !iotlb->nmaps) > + vhost_vdpa_remove_as(v, asid); > } > > static int vhost_vdpa_va_map(struct vhost_vdpa *v, > @@ -943,19 +984,38 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid, > struct vhost_vdpa *v = container_of(dev, struct vhost_vdpa, vdev); > struct vdpa_device *vdpa = v->vdpa; > const struct vdpa_config_ops *ops = vdpa->config; > - struct vhost_vdpa_as *as = asid_to_as(v, 0); > - struct vhost_iotlb *iotlb = &as->iotlb; > + struct vhost_iotlb *iotlb = NULL; > + struct vhost_vdpa_as *as = NULL; > int r = 0; > > - if (asid != 0) > - return -EINVAL; > - > mutex_lock(&dev->mutex); > > r = vhost_dev_check_owner(dev); > if (r) > goto unlock; > > + if (msg->type == VHOST_IOTLB_UPDATE || > + msg->type == VHOST_IOTLB_BATCH_BEGIN) { > + as = vhost_vdpa_find_alloc_as(v, asid); I wonder if it's better to mandate the ASID to [0, dev->nas), otherwise user space is free to use arbitrary IDs which may exceeds the #address spaces that is supported by the device. 
Thanks > + if (!as) { > + dev_err(&v->dev, "can't find and alloc asid %d\n", > + asid); > + return -EINVAL; > + } > + iotlb = &as->iotlb; > + } else > + iotlb = asid_to_iotlb(v, asid); > + > + if ((v->in_batch && v->batch_asid != asid) || !iotlb) { > + if (v->in_batch && v->batch_asid != asid) { > + dev_info(&v->dev, "batch id %d asid %d\n", > + v->batch_asid, asid); > + } > + if (!iotlb) > + dev_err(&v->dev, "no iotlb for asid %d\n", asid); > + return -EINVAL; > + } > + > switch (msg->type) { > case VHOST_IOTLB_UPDATE: > r = vhost_vdpa_process_iotlb_update(v, iotlb, msg); > @@ -964,12 +1024,15 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid, > vhost_vdpa_unmap(v, iotlb, msg->iova, msg->size); > break; > case VHOST_IOTLB_BATCH_BEGIN: > + v->batch_asid = asid; > v->in_batch = true; > break; > case VHOST_IOTLB_BATCH_END: > if (v->in_batch && ops->set_map) > - ops->set_map(vdpa, 0, iotlb); > + ops->set_map(vdpa, asid, iotlb); > v->in_batch = false; > + if (!iotlb->nmaps) > + vhost_vdpa_remove_as(v, asid); > break; > default: > r = -EINVAL; > @@ -1057,9 +1120,17 @@ static void vhost_vdpa_set_iova_range(struct vhost_vdpa *v) > > static void vhost_vdpa_cleanup(struct vhost_vdpa *v) > { > + struct vhost_vdpa_as *as; > + u32 asid; > + > vhost_dev_cleanup(&v->vdev); > kfree(v->vdev.vqs); > - vhost_vdpa_remove_as(v, 0); > + > + for (asid = 0; asid < v->vdpa->nas; asid++) { > + as = asid_to_as(v, asid); > + if (as) > + vhost_vdpa_remove_as(v, asid); > + } > } > > static int vhost_vdpa_open(struct inode *inode, struct file *filep) > @@ -1095,12 +1166,9 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep) > vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false, > vhost_vdpa_process_iotlb_msg); > > - if (!vhost_vdpa_alloc_as(v, 0)) > - goto err_alloc_as; > - > r = vhost_vdpa_alloc_domain(v); > if (r) > - goto err_alloc_as; > + goto err_alloc_domain; > > vhost_vdpa_set_iova_range(v); > > @@ -1108,7 +1176,7 @@ static int vhost_vdpa_open(struct 
inode *inode, struct file *filep) > > return 0; > > -err_alloc_as: > +err_alloc_domain: > vhost_vdpa_cleanup(v); > err: > atomic_dec(&v->opened); > @@ -1233,8 +1301,11 @@ static int vhost_vdpa_probe(struct vdpa_device *vdpa) > int minor; > int i, r; > > - /* Only support 1 address space and 1 groups */ > - if (vdpa->ngroups != 1 || vdpa->nas != 1) > + /* We can't support platform IOMMU device with more than 1 > + * group or as > + */ > + if (!ops->set_map && !ops->dma_map && > + (vdpa->ngroups > 1 || vdpa->nas > 1)) > return -EOPNOTSUPP; > > v = kzalloc(sizeof(*v), GFP_KERNEL | __GFP_RETRY_MAYFAIL); > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > index d1e58f976f6e..5022c648d9c0 100644 > --- a/drivers/vhost/vhost.c > +++ b/drivers/vhost/vhost.c > @@ -1167,7 +1167,7 @@ ssize_t vhost_chr_write_iter(struct vhost_dev *dev, > ret = -EINVAL; > goto done; > } > - offset = sizeof(__u16); > + offset = 0; > } else > offset = sizeof(__u32); > break; > -- > 2.30.1 > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2 15/19] vhost-vdpa: support ASID based IOTLB API [not found] ` <BY5PR02MB698077E814EC867CEBAE2211B1FD9@BY5PR02MB6980.namprd02.prod.outlook.com> @ 2022-05-07 10:24 ` Jason Wang 0 siblings, 0 replies; 6+ messages in thread From: Jason Wang @ 2022-05-07 10:24 UTC (permalink / raw) To: Gautam Dawar Cc: tanuj.kamde@amd.com, kvm, Michael S. Tsirkin, virtualization, Wu Zongyong, Si-Wei Liu, Pablo Cascon Katchadourian, Eli Cohen, Zhang Min, eperezma, Martin Petrus Hubertus Habets, Xie Yongji, Dinan Gunawardena, habetsm.xilinx@gmail.com, Longpeng, Dan Carpenter, netdev, linux-kernel, ecree.xilinx@gmail.com, Harpreet Singh Anand, Martin Porter, Christophe JAILLET, Zhu Lingshan On Thu, Apr 28, 2022 at 2:28 PM Gautam Dawar <gdawar@xilinx.com> wrote: > > -----Original Message----- > From: Jason Wang <jasowang@redhat.com> > Sent: Friday, April 1, 2022 9:55 AM > To: Gautam Dawar <gdawar@xilinx.com> > Cc: Michael S. Tsirkin <mst@redhat.com>; kvm <kvm@vger.kernel.org>; virtualization <virtualization@lists.linux-foundation.org>; netdev <netdev@vger.kernel.org>; linux-kernel <linux-kernel@vger.kernel.org>; Martin Petrus Hubertus Habets <martinh@xilinx.com>; Harpreet Singh Anand <hanand@xilinx.com>; Martin Porter <martinpo@xilinx.com>; Pablo Cascon <pabloc@xilinx.com>; Dinan Gunawardena <dinang@xilinx.com>; tanuj.kamde@amd.com; habetsm.xilinx@gmail.com; ecree.xilinx@gmail.com; eperezma <eperezma@redhat.com>; Gautam Dawar <gdawar@xilinx.com>; Wu Zongyong <wuzongyong@linux.alibaba.com>; Christophe JAILLET <christophe.jaillet@wanadoo.fr>; Eli Cohen <elic@nvidia.com>; Zhu Lingshan <lingshan.zhu@intel.com>; Stefano Garzarella <sgarzare@redhat.com>; Xie Yongji <xieyongji@bytedance.com>; Si-Wei Liu <si-wei.liu@oracle.com>; Parav Pandit <parav@nvidia.com>; Longpeng <longpeng2@huawei.com>; Dan Carpenter <dan.carpenter@oracle.com>; Zhang Min <zhang.min9@zte.com.cn> > Subject: Re: [PATCH v2 15/19] vhost-vdpa: support ASID based IOTLB API > > On Thu, Mar 31, 2022 at 2:17 AM Gautam 
Dawar <gautam.dawar@xilinx.com> wrote: > > > > This patch extends the vhost-vdpa to support ASID based IOTLB API. The > > vhost-vdpa device will allocated multiple IOTLBs for vDPA device that > > supports multiple address spaces. The IOTLBs and vDPA device memory > > mappings is determined and maintained through ASID. > > > > Note that we still don't support vDPA device with more than one > > address spaces that depends on platform IOMMU. This work will be done > > by moving the IOMMU logic from vhost-vDPA to vDPA device driver. > > > > Signed-off-by: Jason Wang <jasowang@redhat.com> > > Signed-off-by: Gautam Dawar <gdawar@xilinx.com> > > --- > > drivers/vhost/vdpa.c | 109 ++++++++++++++++++++++++++++++++++-------- > > drivers/vhost/vhost.c | 2 +- > > 2 files changed, 91 insertions(+), 20 deletions(-) > > > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index > > 6c7ee0f18892..1f1d1c425573 100644 > > --- a/drivers/vhost/vdpa.c > > +++ b/drivers/vhost/vdpa.c > > @@ -28,7 +28,8 @@ > > enum { > > VHOST_VDPA_BACKEND_FEATURES = > > (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) | > > - (1ULL << VHOST_BACKEND_F_IOTLB_BATCH), > > + (1ULL << VHOST_BACKEND_F_IOTLB_BATCH) | > > + (1ULL << VHOST_BACKEND_F_IOTLB_ASID), > > }; > > > > #define VHOST_VDPA_DEV_MAX (1U << MINORBITS) @@ -57,12 +58,20 @@ > > struct vhost_vdpa { > > struct eventfd_ctx *config_ctx; > > int in_batch; > > struct vdpa_iova_range range; > > + u32 batch_asid; > > }; > > > > static DEFINE_IDA(vhost_vdpa_ida); > > > > static dev_t vhost_vdpa_major; > > > > +static inline u32 iotlb_to_asid(struct vhost_iotlb *iotlb) { > > + struct vhost_vdpa_as *as = container_of(iotlb, struct > > + vhost_vdpa_as, iotlb); > > + return as->id; > > +} > > + > > static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 > > asid) { > > struct hlist_head *head = &v->as[asid % > > VHOST_VDPA_IOTLB_BUCKETS]; @@ -75,6 +84,16 @@ static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 asid) > > return NULL; > > } > 
> > > +static struct vhost_iotlb *asid_to_iotlb(struct vhost_vdpa *v, u32 > > +asid) { > > + struct vhost_vdpa_as *as = asid_to_as(v, asid); > > + > > + if (!as) > > + return NULL; > > + > > + return &as->iotlb; > > +} > > + > > static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa > > *v, u32 asid) { > > struct hlist_head *head = &v->as[asid % > > VHOST_VDPA_IOTLB_BUCKETS]; @@ -83,6 +102,9 @@ static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) > > if (asid_to_as(v, asid)) > > return NULL; > > > > + if (asid >= v->vdpa->nas) > > + return NULL; > > + > > as = kmalloc(sizeof(*as), GFP_KERNEL); > > if (!as) > > return NULL; > > @@ -94,6 +116,17 @@ static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) > > return as; > > } > > > > +static struct vhost_vdpa_as *vhost_vdpa_find_alloc_as(struct vhost_vdpa *v, > > + u32 asid) { > > + struct vhost_vdpa_as *as = asid_to_as(v, asid); > > + > > + if (as) > > + return as; > > + > > + return vhost_vdpa_alloc_as(v, asid); } > > + > > static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid) { > > struct vhost_vdpa_as *as = asid_to_as(v, asid); @@ -692,6 > > +725,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb, > > struct vhost_dev *dev = &v->vdev; > > struct vdpa_device *vdpa = v->vdpa; > > const struct vdpa_config_ops *ops = vdpa->config; > > + u32 asid = iotlb_to_asid(iotlb); > > int r = 0; > > > > r = vhost_iotlb_add_range_ctx(iotlb, iova, iova + size - 1, @@ > > -700,10 +734,10 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb, > > return r; > > > > if (ops->dma_map) { > > - r = ops->dma_map(vdpa, 0, iova, size, pa, perm, opaque); > > + r = ops->dma_map(vdpa, asid, iova, size, pa, perm, > > + opaque); > > } else if (ops->set_map) { > > if (!v->in_batch) > > - r = ops->set_map(vdpa, 0, iotlb); > > + r = ops->set_map(vdpa, asid, iotlb); > > } else { > > r = iommu_map(v->domain, iova, pa, size, > > 
perm_to_iommu_flags(perm)); @@ -725,17 > > +759,24 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, { > > struct vdpa_device *vdpa = v->vdpa; > > const struct vdpa_config_ops *ops = vdpa->config; > > + u32 asid = iotlb_to_asid(iotlb); > > > > vhost_vdpa_iotlb_unmap(v, iotlb, iova, iova + size - 1); > > > > if (ops->dma_map) { > > - ops->dma_unmap(vdpa, 0, iova, size); > > + ops->dma_unmap(vdpa, asid, iova, size); > > } else if (ops->set_map) { > > if (!v->in_batch) > > - ops->set_map(vdpa, 0, iotlb); > > + ops->set_map(vdpa, asid, iotlb); > > } else { > > iommu_unmap(v->domain, iova, size); > > } > > + > > + /* If we are in the middle of batch processing, delay the free > > + * of AS until BATCH_END. > > + */ > > + if (!v->in_batch && !iotlb->nmaps) > > + vhost_vdpa_remove_as(v, asid); > > } > > > > static int vhost_vdpa_va_map(struct vhost_vdpa *v, @@ -943,19 +984,38 > > @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid, > > struct vhost_vdpa *v = container_of(dev, struct vhost_vdpa, vdev); > > struct vdpa_device *vdpa = v->vdpa; > > const struct vdpa_config_ops *ops = vdpa->config; > > - struct vhost_vdpa_as *as = asid_to_as(v, 0); > > - struct vhost_iotlb *iotlb = &as->iotlb; > > + struct vhost_iotlb *iotlb = NULL; > > + struct vhost_vdpa_as *as = NULL; > > int r = 0; > > > > - if (asid != 0) > > - return -EINVAL; > > - > > mutex_lock(&dev->mutex); > > > > r = vhost_dev_check_owner(dev); > > if (r) > > goto unlock; > > > > + if (msg->type == VHOST_IOTLB_UPDATE || > > + msg->type == VHOST_IOTLB_BATCH_BEGIN) { > > + as = vhost_vdpa_find_alloc_as(v, asid); > > I wonder if it's better to mandate the ASID to [0, dev->nas), otherwise user space is free to use arbitrary IDs which may exceeds the #address spaces that is supported by the device. > [GD>>] Isn’t the following check in vhost_vdpa_alloc_as () sufficient to ensure ASID's value in the range [0, dev->nas): > if (asid >= v->vdpa->nas) > return NULL; I think you're right. 
So we are fine. Thanks > > Thanks > > > + if (!as) { > > + dev_err(&v->dev, "can't find and alloc asid %d\n", > > + asid); > > + return -EINVAL; > > + } > > + iotlb = &as->iotlb; > > + } else > > + iotlb = asid_to_iotlb(v, asid); > > + > > + if ((v->in_batch && v->batch_asid != asid) || !iotlb) { > > + if (v->in_batch && v->batch_asid != asid) { > > + dev_info(&v->dev, "batch id %d asid %d\n", > > + v->batch_asid, asid); > > + } > > + if (!iotlb) > > + dev_err(&v->dev, "no iotlb for asid %d\n", asid); > > + return -EINVAL; > > + } > > + > > switch (msg->type) { > > case VHOST_IOTLB_UPDATE: > > r = vhost_vdpa_process_iotlb_update(v, iotlb, msg); @@ > > -964,12 +1024,15 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid, > > vhost_vdpa_unmap(v, iotlb, msg->iova, msg->size); > > break; > > case VHOST_IOTLB_BATCH_BEGIN: > > + v->batch_asid = asid; > > v->in_batch = true; > > break; > > case VHOST_IOTLB_BATCH_END: > > if (v->in_batch && ops->set_map) > > - ops->set_map(vdpa, 0, iotlb); > > + ops->set_map(vdpa, asid, iotlb); > > v->in_batch = false; > > + if (!iotlb->nmaps) > > + vhost_vdpa_remove_as(v, asid); > > break; > > default: > > r = -EINVAL; > > @@ -1057,9 +1120,17 @@ static void vhost_vdpa_set_iova_range(struct > > vhost_vdpa *v) > > > > static void vhost_vdpa_cleanup(struct vhost_vdpa *v) { > > + struct vhost_vdpa_as *as; > > + u32 asid; > > + > > vhost_dev_cleanup(&v->vdev); > > kfree(v->vdev.vqs); > > - vhost_vdpa_remove_as(v, 0); > > + > > + for (asid = 0; asid < v->vdpa->nas; asid++) { > > + as = asid_to_as(v, asid); > > + if (as) > > + vhost_vdpa_remove_as(v, asid); > > + } > > } > > > > static int vhost_vdpa_open(struct inode *inode, struct file *filep) > > @@ -1095,12 +1166,9 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep) > > vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false, > > vhost_vdpa_process_iotlb_msg); > > > > - if (!vhost_vdpa_alloc_as(v, 0)) > > - goto err_alloc_as; > > - > > r = 
vhost_vdpa_alloc_domain(v); > > if (r) > > - goto err_alloc_as; > > + goto err_alloc_domain; > > > > vhost_vdpa_set_iova_range(v); > > > > @@ -1108,7 +1176,7 @@ static int vhost_vdpa_open(struct inode *inode, > > struct file *filep) > > > > return 0; > > > > -err_alloc_as: > > +err_alloc_domain: > > vhost_vdpa_cleanup(v); > > err: > > atomic_dec(&v->opened); > > @@ -1233,8 +1301,11 @@ static int vhost_vdpa_probe(struct vdpa_device *vdpa) > > int minor; > > int i, r; > > > > - /* Only support 1 address space and 1 groups */ > > - if (vdpa->ngroups != 1 || vdpa->nas != 1) > > + /* We can't support platform IOMMU device with more than 1 > > + * group or as > > + */ > > + if (!ops->set_map && !ops->dma_map && > > + (vdpa->ngroups > 1 || vdpa->nas > 1)) > > return -EOPNOTSUPP; > > > > v = kzalloc(sizeof(*v), GFP_KERNEL | __GFP_RETRY_MAYFAIL); > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index > > d1e58f976f6e..5022c648d9c0 100644 > > --- a/drivers/vhost/vhost.c > > +++ b/drivers/vhost/vhost.c > > @@ -1167,7 +1167,7 @@ ssize_t vhost_chr_write_iter(struct vhost_dev *dev, > > ret = -EINVAL; > > goto done; > > } > > - offset = sizeof(__u16); > > + offset = 0; > > } else > > offset = sizeof(__u32); > > break; > > -- > > 2.30.1 > > > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2 00/19] Control VQ support in vDPA
From: Jason Wang @ 2022-05-09 3:42 UTC
To: Michael S. Tsirkin
Cc: tanuj.kamde, kvm, virtualization, Wu Zongyong, Si-Wei Liu, Pablo Cascon Katchadourian, Eli Cohen, Zhang Min, Xie Yongji, Martin Petrus Hubertus Habets, eperezma, Dinan Gunawardena, habetsm.xilinx, Longpeng, Dan Carpenter, Gautam Dawar, Christophe JAILLET, Gautam Dawar, netdev, linux-kernel, ecree.xilinx, Harpreet Singh Anand, Martin Porter, Zhu Lingshan

On Thu, Mar 31, 2022 at 2:05 AM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>
> Hi All:
>
> This series adds support for the control virtqueue in vDPA.
>
> The control virtqueue is used by a networking device to accept various
> commands from the driver. It is a must for supporting multiqueue and
> other configurations.
>
> When used by the vhost-vDPA bus driver for a VM, the control virtqueue
> should be shadowed by the userspace VMM (Qemu) instead of being assigned
> directly to the guest. This is because Qemu needs to know the device
> state in order to start and stop the device correctly (e.g. for live
> migration).
>
> This requires isolating the memory mapping for the control virtqueue
> presented by vhost-vDPA, to prevent the guest from accessing it directly.
>
> To achieve this, vDPA introduces two new abstractions:
>
> - address space: identified by an address space id (ASID); a set of
>   memory mappings is maintained per address space
> - virtqueue group: the minimal set of virtqueues that must share an
>   address space
>
> A device needs to advertise the following attributes to vDPA:
>
> - the number of address spaces supported by the device
> - the number of virtqueue groups supported by the device
> - the mapping from a specific virtqueue to its virtqueue group
>
> The mapping from virtqueue to virtqueue group is fixed and defined by
> the vDPA device driver. E.g:
>
> - A device with hardware ASID support can simply advertise an address
>   space per virtqueue group.
> - A device without hardware ASID support can simply advertise a single
>   virtqueue group that contains all virtqueues. Or, if it wants a
>   software emulated control virtqueue, it can advertise two virtqueue
>   groups: one for the cvq and another for the rest of the virtqueues.
>
> vDPA also allows changing the association between a virtqueue group and
> an address space. So in the case of the control virtqueue, the userspace
> VMM (Qemu) may use a dedicated address space for the control virtqueue
> group to isolate its memory mapping.
>
> vhost/vhost-vDPA is also extended so that userspace can:
>
> - query the number of virtqueue groups and address spaces supported by
>   the device
> - query the virtqueue group for a specific virtqueue
> - associate a virtqueue group with an address space
> - send ASID based IOTLB commands
>
> This will help the userspace VMM (Qemu) detect whether the control vq
> can be supported and isolate the memory mappings of the control
> virtqueue from the others.
>
> To demonstrate the usage, the vDPA simulator is extended to support
> setting the MAC address via an emulated control virtqueue.
>
> Please review.

Michael, this looks good to me, do you have comments on this?
Thanks > > Changes since RFC v2: > > - Fixed memory leak for asid 0 in vhost_vdpa_remove_as() > - Removed unnecessary NULL check for iotlb in vhost_vdpa_unmap() and > changed its return type to void. > - Removed insignificant used_as member field from struct vhost_vdpa. > - Corrected the iommu parameter in call to vringh_set_iotlb() from > vdpasim_set_group_asid() > - Fixed build errors with vdpa_sim_net > - Updated alibaba, vdpa_user and virtio_pci vdpa parent drivers to > call updated vDPA APIs and ensured successful build > - Tested control (MAC address configuration) and data-path using > single virtqueue pair on Xilinx (now AMD) SN1022 SmartNIC device > and vdpa_sim_net software device using QEMU release at [1] > - Removed two extra blank lines after set_group_asid() in > include/linux/vdpa.h > > Changes since v1: > > - Rebased the v1 patch series on vhost branch of MST vhost git repo > git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost > - Updates to accommodate vdpa_sim changes from monolithic module in > kernel used v1 patch series to current modularized class (net, block) > based approach. 
> - Added new attributes (ngroups and nas) to "vdpasim_dev_attr" and > propagated them from vdpa_sim_net to vdpa_sim > - Widened the data-type for "asid" member of vhost_msg_v2 to __u32 > to accommodate PASID > - Fixed the buildbot warnings > - Resolved all checkpatch.pl errors and warnings > - Tested both control and datapath with Xilinx Smartnic SN1000 series > device using QEMU implementing the Shadow virtqueue and support for > VQ groups and ASID available at [1] > > Changes since RFC: > > - tweak vhost uAPI documentation > - switch to use device specific IOTLB really in patch 4 > - tweak the commit log > - fix that ASID in vhost is claimed to be 32 actually but 16bit > actually > - fix use after free when using ASID with IOTLB batching requests > - switch to use Stefano's patch for having separated iov > - remove unused "used_as" variable > - fix the iotlb/asid checking in vhost_vdpa_unmap() > > [1] Development QEMU release with support for SVQ, VQ groups and ASID: > github.com/eugpermar/qemu/releases/tag/vdpa_sw_live_migration.d%2F > asid_groups-v1.d%2F00 > > Thanks > > Gautam Dawar (19): > vhost: move the backend feature bits to vhost_types.h > virtio-vdpa: don't set callback if virtio doesn't need it > vhost-vdpa: passing iotlb to IOMMU mapping helpers > vhost-vdpa: switch to use vhost-vdpa specific IOTLB > vdpa: introduce virtqueue groups > vdpa: multiple address spaces support > vdpa: introduce config operations for associating ASID to a virtqueue > group > vhost_iotlb: split out IOTLB initialization > vhost: support ASID in IOTLB API > vhost-vdpa: introduce asid based IOTLB > vhost-vdpa: introduce uAPI to get the number of virtqueue groups > vhost-vdpa: introduce uAPI to get the number of address spaces > vhost-vdpa: uAPI to get virtqueue group id > vhost-vdpa: introduce uAPI to set group ASID > vhost-vdpa: support ASID based IOTLB API > vdpa_sim: advertise VIRTIO_NET_F_MTU > vdpa_sim: factor out buffer completion logic > vdpa_sim: filter destination mac 
address > vdpasim: control virtqueue support > > drivers/vdpa/alibaba/eni_vdpa.c | 2 +- > drivers/vdpa/ifcvf/ifcvf_main.c | 8 +- > drivers/vdpa/mlx5/net/mlx5_vnet.c | 11 +- > drivers/vdpa/vdpa.c | 5 + > drivers/vdpa/vdpa_sim/vdpa_sim.c | 100 ++++++++-- > drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 + > drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 169 +++++++++++++---- > drivers/vdpa/vdpa_user/vduse_dev.c | 3 +- > drivers/vdpa/virtio_pci/vp_vdpa.c | 2 +- > drivers/vhost/iotlb.c | 23 ++- > drivers/vhost/vdpa.c | 267 +++++++++++++++++++++------ > drivers/vhost/vhost.c | 23 ++- > drivers/vhost/vhost.h | 4 +- > drivers/virtio/virtio_vdpa.c | 2 +- > include/linux/vdpa.h | 44 ++++- > include/linux/vhost_iotlb.h | 2 + > include/uapi/linux/vhost.h | 26 ++- > include/uapi/linux/vhost_types.h | 11 +- > 18 files changed, 563 insertions(+), 142 deletions(-) > > -- > 2.30.1 > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2 00/19] Control VQ support in vDPA 2022-05-09 3:42 ` [PATCH v2 00/19] Control VQ support in vDPA Jason Wang @ 2022-05-09 7:07 ` Michael S. Tsirkin 0 siblings, 0 replies; 6+ messages in thread From: Michael S. Tsirkin @ 2022-05-09 7:07 UTC (permalink / raw) To: Jason Wang Cc: tanuj.kamde, kvm, virtualization, Wu Zongyong, Si-Wei Liu, Pablo Cascon Katchadourian, Eli Cohen, Zhang Min, Xie Yongji, Martin Petrus Hubertus Habets, eperezma, Dinan Gunawardena, habetsm.xilinx, Longpeng, Dan Carpenter, Gautam Dawar, Christophe JAILLET, Gautam Dawar, netdev, linux-kernel, ecree.xilinx, Harpreet Singh Anand, Martin Porter, Zhu Lingshan On Mon, May 09, 2022 at 11:42:10AM +0800, Jason Wang wrote: > On Thu, Mar 31, 2022 at 2:05 AM Gautam Dawar <gautam.dawar@xilinx.com> wrote: > > > > Hi All: > > > > This series tries to add the support for control virtqueue in vDPA. > > > > Control virtqueue is used by networking device for accepting various > > commands from the driver. It's a must to support multiqueue and other > > configurations. > > > > When used by vhost-vDPA bus driver for VM, the control virtqueue > > should be shadowed via userspace VMM (Qemu) instead of being assigned > > directly to Guest. This is because Qemu needs to know the device state > > in order to start and stop device correctly (e.g for Live Migration). > > > > This requies to isolate the memory mapping for control virtqueue > > presented by vhost-vDPA to prevent guest from accessing it directly. 
> > > > To achieve this, vDPA introduce two new abstractions: > > > > - address space: identified through address space id (ASID) and a set > > of memory mapping in maintained > > - virtqueue group: the minimal set of virtqueues that must share an > > address space > > > > Device needs to advertise the following attributes to vDPA: > > > > - the number of address spaces supported in the device > > - the number of virtqueue groups supported in the device > > - the mappings from a specific virtqueue to its virtqueue groups > > > > The mappings from virtqueue to virtqueue groups is fixed and defined > > by vDPA device driver. E.g: > > > > - For the device that has hardware ASID support, it can simply > > advertise a per virtqueue group. > > - For the device that does not have hardware ASID support, it can > > simply advertise a single virtqueue group that contains all > > virtqueues. Or if it wants a software emulated control virtqueue, it > > can advertise two virtqueue groups, one is for cvq, another is for > > the rest virtqueues. > > > > vDPA also allow to change the association between virtqueue group and > > address space. So in the case of control virtqueue, userspace > > VMM(Qemu) may use a dedicated address space for the control virtqueue > > group to isolate the memory mapping. > > > > The vhost/vhost-vDPA is also extend for the userspace to: > > > > - query the number of virtqueue groups and address spaces supported by > > the device > > - query the virtqueue group for a specific virtqueue > > - assocaite a virtqueue group with an address space > > - send ASID based IOTLB commands > > > > This will help userspace VMM(Qemu) to detect whether the control vq > > could be supported and isolate memory mappings of control virtqueue > > from the others. > > > > To demonstrate the usage, vDPA simulator is extended to support > > setting MAC address via a emulated control virtqueue. > > > > Please review. 
> > Michael, this looks good to me, do you have comments on this? > > Thanks I'll merge this for next. > > > > Changes since RFC v2: > > > > - Fixed memory leak for asid 0 in vhost_vdpa_remove_as() > > - Removed unnecessary NULL check for iotlb in vhost_vdpa_unmap() and > > changed its return type to void. > > - Removed insignificant used_as member field from struct vhost_vdpa. > > - Corrected the iommu parameter in call to vringh_set_iotlb() from > > vdpasim_set_group_asid() > > - Fixed build errors with vdpa_sim_net > > - Updated alibaba, vdpa_user and virtio_pci vdpa parent drivers to > > call updated vDPA APIs and ensured successful build > > - Tested control (MAC address configuration) and data-path using > > single virtqueue pair on Xilinx (now AMD) SN1022 SmartNIC device > > and vdpa_sim_net software device using QEMU release at [1] > > - Removed two extra blank lines after set_group_asid() in > > include/linux/vdpa.h > > > > Changes since v1: > > > > - Rebased the v1 patch series on vhost branch of MST vhost git repo > > git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost > > - Updates to accommodate vdpa_sim changes from monolithic module in > > kernel used v1 patch series to current modularized class (net, block) > > based approach. 
> > - Added new attributes (ngroups and nas) to "vdpasim_dev_attr" and > > propagated them from vdpa_sim_net to vdpa_sim > > - Widened the data-type for "asid" member of vhost_msg_v2 to __u32 > > to accommodate PASID > > - Fixed the buildbot warnings > > - Resolved all checkpatch.pl errors and warnings > > - Tested both control and datapath with Xilinx Smartnic SN1000 series > > device using QEMU implementing the Shadow virtqueue and support for > > VQ groups and ASID available at [1] > > > > Changes since RFC: > > > > - tweak vhost uAPI documentation > > - switch to use device specific IOTLB really in patch 4 > > - tweak the commit log > > - fix that ASID in vhost is claimed to be 32 actually but 16bit > > actually > > - fix use after free when using ASID with IOTLB batching requests > > - switch to use Stefano's patch for having separated iov > > - remove unused "used_as" variable > > - fix the iotlb/asid checking in vhost_vdpa_unmap() > > > > [1] Development QEMU release with support for SVQ, VQ groups and ASID: > > github.com/eugpermar/qemu/releases/tag/vdpa_sw_live_migration.d%2F > > asid_groups-v1.d%2F00 > > > > Thanks > > > > Gautam Dawar (19): > > vhost: move the backend feature bits to vhost_types.h > > virtio-vdpa: don't set callback if virtio doesn't need it > > vhost-vdpa: passing iotlb to IOMMU mapping helpers > > vhost-vdpa: switch to use vhost-vdpa specific IOTLB > > vdpa: introduce virtqueue groups > > vdpa: multiple address spaces support > > vdpa: introduce config operations for associating ASID to a virtqueue > > group > > vhost_iotlb: split out IOTLB initialization > > vhost: support ASID in IOTLB API > > vhost-vdpa: introduce asid based IOTLB > > vhost-vdpa: introduce uAPI to get the number of virtqueue groups > > vhost-vdpa: introduce uAPI to get the number of address spaces > > vhost-vdpa: uAPI to get virtqueue group id > > vhost-vdpa: introduce uAPI to set group ASID > > vhost-vdpa: support ASID based IOTLB API > > vdpa_sim: advertise 
VIRTIO_NET_F_MTU > > vdpa_sim: factor out buffer completion logic > > vdpa_sim: filter destination mac address > > vdpasim: control virtqueue support > > > > drivers/vdpa/alibaba/eni_vdpa.c | 2 +- > > drivers/vdpa/ifcvf/ifcvf_main.c | 8 +- > > drivers/vdpa/mlx5/net/mlx5_vnet.c | 11 +- > > drivers/vdpa/vdpa.c | 5 + > > drivers/vdpa/vdpa_sim/vdpa_sim.c | 100 ++++++++-- > > drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 + > > drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 169 +++++++++++++---- > > drivers/vdpa/vdpa_user/vduse_dev.c | 3 +- > > drivers/vdpa/virtio_pci/vp_vdpa.c | 2 +- > > drivers/vhost/iotlb.c | 23 ++- > > drivers/vhost/vdpa.c | 267 +++++++++++++++++++++------ > > drivers/vhost/vhost.c | 23 ++- > > drivers/vhost/vhost.h | 4 +- > > drivers/virtio/virtio_vdpa.c | 2 +- > > include/linux/vdpa.h | 44 ++++- > > include/linux/vhost_iotlb.h | 2 + > > include/uapi/linux/vhost.h | 26 ++- > > include/uapi/linux/vhost_types.h | 11 +- > > 18 files changed, 563 insertions(+), 142 deletions(-) > > > > -- > > 2.30.1 > > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 6+ messages in thread
[parent not found: <20220330180436.24644-20-gdawar@xilinx.com>]
* Re: [PATCH v2 19/19] vdpasim: control virtqueue support [not found] ` <20220330180436.24644-20-gdawar@xilinx.com> @ 2022-06-21 15:19 ` Stefano Garzarella [not found] ` <CAJaqyWd8MR9vTRcCTktzC3VL054x5H5_sXy+MLVNewFDkjQUSw@mail.gmail.com> 0 siblings, 1 reply; 6+ messages in thread From: Stefano Garzarella @ 2022-06-21 15:19 UTC (permalink / raw) To: Gautam Dawar, Jason Wang Cc: Kamde, Tanuj, kvm, Michael S. Tsirkin, Linux Virtualization, Wu Zongyong, pabloc, Eli Cohen, Zhang Min, Eugenio Perez Martin, Martin Petrus Hubertus Habets, Xie Yongji, dinang, habetsm.xilinx, Longpeng, Dan Carpenter, Gautam Dawar, Christophe JAILLET, netdev, kernel list, ecree.xilinx, Harpreet Singh Anand, martinpo, Zhu Lingshan Hi Gautam, On Wed, Mar 30, 2022 at 8:21 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote: > > This patch introduces control virtqueue support for the vDPA > simulator. This is a requirement for supporting advanced features like > multiqueue. > > A requirement for the control virtqueue is to isolate its memory access > from the rx/tx virtqueues. This is because when using a vDPA device > for a VM, the control virtqueue is not directly assigned to the VM. Userspace > (Qemu) presents a shadow control virtqueue in order to record > the device state. > > The isolation is done via the virtqueue groups and ASID support in > vDPA through vhost-vdpa. The simulator is extended to have: > > 1) three virtqueues: RXVQ, TXVQ and CVQ (control virtqueue) > 2) two virtqueue groups: group 0 contains RXVQ and TXVQ; group 1 > contains CVQ > 3) two address spaces; the simulator simply implements the address > spaces by mapping them 1:1 to IOTLBs. > > For the VM use cases, userspace (Qemu) may set AS 0 to group 0 and AS 1 > to group 1. So we have: > > 1) The IOTLB for virtqueue group 0 contains the mappings of the guest, so > RX and TX can be assigned to the guest directly. 
> 2) The IOTLB for virtqueue group 1 contains the mappings of the CVQ, which > are the buffers allocated and managed by the VMM only. So the CVQ of > vhost-vdpa is visible to the VMM only, and the guest cannot access the CVQ > of vhost-vdpa. > > For the other use cases, AS 0 is associated with all virtqueue > groups by default, so all virtqueues share the same mapping by default. > > To demonstrate the function, VIRTIO_NET_F_CTRL_MACADDR is > implemented in the simulator for the driver to set the mac address. > > Signed-off-by: Jason Wang <jasowang@redhat.com> > Signed-off-by: Gautam Dawar <gdawar@xilinx.com> > --- > drivers/vdpa/vdpa_sim/vdpa_sim.c | 91 ++++++++++++++++++++++------ > drivers/vdpa/vdpa_sim/vdpa_sim.h | 2 + > drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 88 ++++++++++++++++++++++++++- > 3 files changed, 161 insertions(+), 20 deletions(-) > > diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c > index 659e2e2e4b0c..51bd0bafce06 100644 > --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c > +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c > @@ -96,11 +96,17 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim) > { > int i; > > - for (i = 0; i < vdpasim->dev_attr.nvqs; i++) > + spin_lock(&vdpasim->iommu_lock); > + > + for (i = 0; i < vdpasim->dev_attr.nvqs; i++) { > vdpasim_vq_reset(vdpasim, &vdpasim->vqs[i]); > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0], > + &vdpasim->iommu_lock); > + } > + > + for (i = 0; i < vdpasim->dev_attr.nas; i++) > + vhost_iotlb_reset(&vdpasim->iommu[i]); > > - spin_lock(&vdpasim->iommu_lock); > - vhost_iotlb_reset(vdpasim->iommu); > spin_unlock(&vdpasim->iommu_lock); > > vdpasim->features = 0; > @@ -145,7 +151,7 @@ static dma_addr_t vdpasim_map_range(struct vdpasim *vdpasim, phys_addr_t paddr, > dma_addr = iova_dma_addr(&vdpasim->iova, iova); > > spin_lock(&vdpasim->iommu_lock); > - ret = vhost_iotlb_add_range(vdpasim->iommu, (u64)dma_addr, > + ret = vhost_iotlb_add_range(&vdpasim->iommu[0], (u64)dma_addr, > (u64)dma_addr + 
size - 1, (u64)paddr, perm); > spin_unlock(&vdpasim->iommu_lock); > > @@ -161,7 +167,7 @@ static void vdpasim_unmap_range(struct vdpasim *vdpasim, dma_addr_t dma_addr, > size_t size) > { > spin_lock(&vdpasim->iommu_lock); > - vhost_iotlb_del_range(vdpasim->iommu, (u64)dma_addr, > + vhost_iotlb_del_range(&vdpasim->iommu[0], (u64)dma_addr, > (u64)dma_addr + size - 1); > spin_unlock(&vdpasim->iommu_lock); > > @@ -250,8 +256,9 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr) > else > ops = &vdpasim_config_ops; > > - vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, 1, > - 1, dev_attr->name, false); > + vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, > + dev_attr->ngroups, dev_attr->nas, > + dev_attr->name, false); > if (IS_ERR(vdpasim)) { > ret = PTR_ERR(vdpasim); > goto err_alloc; > @@ -278,16 +285,20 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr) > if (!vdpasim->vqs) > goto err_iommu; > > - vdpasim->iommu = vhost_iotlb_alloc(max_iotlb_entries, 0); > + vdpasim->iommu = kmalloc_array(vdpasim->dev_attr.nas, > + sizeof(*vdpasim->iommu), GFP_KERNEL); > if (!vdpasim->iommu) > goto err_iommu; > > + for (i = 0; i < vdpasim->dev_attr.nas; i++) > + vhost_iotlb_init(&vdpasim->iommu[i], 0, 0); > + > vdpasim->buffer = kvmalloc(dev_attr->buffer_size, GFP_KERNEL); > if (!vdpasim->buffer) > goto err_iommu; > > for (i = 0; i < dev_attr->nvqs; i++) > - vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu, > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0], > &vdpasim->iommu_lock); > > ret = iova_cache_get(); > @@ -401,7 +412,11 @@ static u32 vdpasim_get_vq_align(struct vdpa_device *vdpa) > > static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx) > { > - return 0; > + /* RX and TX belongs to group 0, CVQ belongs to group 1 */ > + if (idx == 2) > + return 1; > + else > + return 0; This code only works for the vDPA-net simulator, since vdpasim_get_vq_group() is also shared with other simulators 
(e.g. vdpa_sim_blk), should we move this net-specific code into vdpa_sim_net.c, maybe adding a callback implemented by the different simulators? Thanks, Stefano ^ permalink raw reply [flat|nested] 6+ messages in thread
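One possible shape for the callback Stefano suggests, sketched in user space with hypothetical names (this is not actual kernel code): the core simulator consults a per-device hook supplied via its dev_attr and falls back to a single group when no hook is set, so vdpa_sim_blk would keep its current behavior unchanged:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-device attribute carrying the vq -> group mapping.
 * A NULL hook means the device uses a single virtqueue group. */
struct toy_dev_attr {
	uint32_t (*get_vq_group)(uint16_t idx);
};

/* Net simulator's mapping: isolate the CVQ (idx 2) in group 1. */
static uint32_t toy_net_get_vq_group(uint16_t idx)
{
	return idx == 2 ? 1 : 0;
}

/* Core helper: dispatch to the device-specific mapping if provided,
 * otherwise report group 0 for every virtqueue. */
static uint32_t toy_core_get_vq_group(const struct toy_dev_attr *attr,
				      uint16_t idx)
{
	if (attr->get_vq_group)
		return attr->get_vq_group(idx);
	return 0;
}
```

This keeps the net-specific policy in the net module while the shared core stays generic, which is the design concern raised above.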
[parent not found: <CAJaqyWd8MR9vTRcCTktzC3VL054x5H5_sXy+MLVNewFDkjQUSw@mail.gmail.com>]
[parent not found: <CAJaqyWc36adK-gUzc8tMgDDe5SoBPy7xN-UtcFA4=aDezdJ5LA@mail.gmail.com>]
* Re: [PATCH v2 19/19] vdpasim: control virtqueue support [not found] ` <CAJaqyWc36adK-gUzc8tMgDDe5SoBPy7xN-UtcFA4=aDezdJ5LA@mail.gmail.com> @ 2022-06-22 15:44 ` Stefano Garzarella 0 siblings, 0 replies; 6+ messages in thread From: Stefano Garzarella @ 2022-06-22 15:44 UTC (permalink / raw) To: Eugenio Perez Martin Cc: Kamde, Tanuj, kvm, Michael S. Tsirkin, Linux Virtualization, Wu Zongyong, Pablo Cascon Katchadourian, Eli Cohen, Zhang Min, Martin Petrus Hubertus Habets, Xie Yongji, Dinan Gunawardena, habetsm.xilinx, Longpeng, Dan Carpenter, Gautam Dawar, Christophe JAILLET, Gautam Dawar, netdev, kernel list, ecree.xilinx, Harpreet Singh Anand, Martin Porter, Zhu Lingshan On Wed, Jun 22, 2022 at 05:04:44PM +0200, Eugenio Perez Martin wrote: >On Wed, Jun 22, 2022 at 12:21 PM Eugenio Perez Martin ><eperezma@redhat.com> wrote: >> >> On Tue, Jun 21, 2022 at 5:20 PM Stefano Garzarella <sgarzare@redhat.com> wrote: >> > >> > Hi Gautam, >> > >> > On Wed, Mar 30, 2022 at 8:21 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote: >> > > >> > > This patch introduces the control virtqueue support for vDPA >> > > simulator. This is a requirement for supporting advanced features like >> > > multiqueue. >> > > >> > > A requirement for control virtqueue is to isolate its memory access >> > > from the rx/tx virtqueues. This is because when using vDPA device >> > > for VM, the control virqueue is not directly assigned to VM. Userspace >> > > (Qemu) will present a shadow control virtqueue to control for >> > > recording the device states. >> > > >> > > The isolation is done via the virtqueue groups and ASID support in >> > > vDPA through vhost-vdpa. The simulator is extended to have: >> > > >> > > 1) three virtqueues: RXVQ, TXVQ and CVQ (control virtqueue) >> > > 2) two virtqueue groups: group 0 contains RXVQ and TXVQ; group 1 >> > > contains CVQ >> > > 3) two address spaces and the simulator simply implements the address >> > > spaces by mapping it 1:1 to IOTLB. 
>> > > >> > > For the VM use cases, userspace(Qemu) may set AS 0 to group 0 and AS 1 >> > > to group 1. So we have: >> > > >> > > 1) The IOTLB for virtqueue group 0 contains the mappings of guest, so >> > > RX and TX can be assigned to guest directly. >> > > 2) The IOTLB for virtqueue group 1 contains the mappings of CVQ which >> > > is the buffers that allocated and managed by VMM only. So CVQ of >> > > vhost-vdpa is visible to VMM only. And Guest can not access the CVQ >> > > of vhost-vdpa. >> > > >> > > For the other use cases, since AS 0 is associated to all virtqueue >> > > groups by default. All virtqueues share the same mapping by default. >> > > >> > > To demonstrate the function, VIRITO_NET_F_CTRL_MACADDR is >> > > implemented in the simulator for the driver to set mac address. >> > > >> > > Signed-off-by: Jason Wang <jasowang@redhat.com> >> > > Signed-off-by: Gautam Dawar <gdawar@xilinx.com> >> > > --- >> > > drivers/vdpa/vdpa_sim/vdpa_sim.c | 91 ++++++++++++++++++++++------ >> > > drivers/vdpa/vdpa_sim/vdpa_sim.h | 2 + >> > > drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 88 ++++++++++++++++++++++++++- >> > > 3 files changed, 161 insertions(+), 20 deletions(-) >> > > >> > > diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c >> > > index 659e2e2e4b0c..51bd0bafce06 100644 >> > > --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c >> > > +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c >> > > @@ -96,11 +96,17 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim) >> > > { >> > > int i; >> > > >> > > - for (i = 0; i < vdpasim->dev_attr.nvqs; i++) >> > > + spin_lock(&vdpasim->iommu_lock); >> > > + >> > > + for (i = 0; i < vdpasim->dev_attr.nvqs; i++) { >> > > vdpasim_vq_reset(vdpasim, &vdpasim->vqs[i]); >> > > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0], >> > > + &vdpasim->iommu_lock); >> > > + } >> > > + >> > > + for (i = 0; i < vdpasim->dev_attr.nas; i++) >> > > + vhost_iotlb_reset(&vdpasim->iommu[i]); >> > > >> > > - 
spin_lock(&vdpasim->iommu_lock); >> > > - vhost_iotlb_reset(vdpasim->iommu); >> > > spin_unlock(&vdpasim->iommu_lock); >> > > >> > > vdpasim->features = 0; >> > > @@ -145,7 +151,7 @@ static dma_addr_t vdpasim_map_range(struct vdpasim *vdpasim, phys_addr_t paddr, >> > > dma_addr = iova_dma_addr(&vdpasim->iova, iova); >> > > >> > > spin_lock(&vdpasim->iommu_lock); >> > > - ret = vhost_iotlb_add_range(vdpasim->iommu, (u64)dma_addr, >> > > + ret = vhost_iotlb_add_range(&vdpasim->iommu[0], (u64)dma_addr, >> > > (u64)dma_addr + size - 1, (u64)paddr, perm); >> > > spin_unlock(&vdpasim->iommu_lock); >> > > >> > > @@ -161,7 +167,7 @@ static void vdpasim_unmap_range(struct vdpasim *vdpasim, dma_addr_t dma_addr, >> > > size_t size) >> > > { >> > > spin_lock(&vdpasim->iommu_lock); >> > > - vhost_iotlb_del_range(vdpasim->iommu, (u64)dma_addr, >> > > + vhost_iotlb_del_range(&vdpasim->iommu[0], (u64)dma_addr, >> > > (u64)dma_addr + size - 1); >> > > spin_unlock(&vdpasim->iommu_lock); >> > > >> > > @@ -250,8 +256,9 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr) >> > > else >> > > ops = &vdpasim_config_ops; >> > > >> > > - vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, 1, >> > > - 1, dev_attr->name, false); >> > > + vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, >> > > + dev_attr->ngroups, dev_attr->nas, >> > > + dev_attr->name, false); >> > > if (IS_ERR(vdpasim)) { >> > > ret = PTR_ERR(vdpasim); >> > > goto err_alloc; >> > > @@ -278,16 +285,20 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr) >> > > if (!vdpasim->vqs) >> > > goto err_iommu; >> > > >> > > - vdpasim->iommu = vhost_iotlb_alloc(max_iotlb_entries, 0); >> > > + vdpasim->iommu = kmalloc_array(vdpasim->dev_attr.nas, >> > > + sizeof(*vdpasim->iommu), GFP_KERNEL); >> > > if (!vdpasim->iommu) >> > > goto err_iommu; >> > > >> > > + for (i = 0; i < vdpasim->dev_attr.nas; i++) >> > > + vhost_iotlb_init(&vdpasim->iommu[i], 0, 0); >> > > + >> > > vdpasim->buffer 
= kvmalloc(dev_attr->buffer_size, GFP_KERNEL); >> > > if (!vdpasim->buffer) >> > > goto err_iommu; >> > > >> > > for (i = 0; i < dev_attr->nvqs; i++) >> > > - vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu, >> > > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0], >> > > &vdpasim->iommu_lock); >> > > >> > > ret = iova_cache_get(); >> > > @@ -401,7 +412,11 @@ static u32 vdpasim_get_vq_align(struct vdpa_device *vdpa) >> > > >> > > static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx) >> > > { >> > > - return 0; >> > > + /* RX and TX belongs to group 0, CVQ belongs to group 1 */ >> > > + if (idx == 2) >> > > + return 1; >> > > + else >> > > + return 0; >> > >> > This code only works for the vDPA-net simulator, since >> > vdpasim_get_vq_group() is also shared with other simulators (e.g. >> > vdpa_sim_blk), >> >> That's totally right. >> >> > should we move this net-specific code into >> > vdpa_sim_net.c, maybe adding a callback implemented by the different >> > simulators? >> > >> >> At this moment, VDPASIM_BLK_VQ_NUM is fixed to 1, so maybe the right >> thing to do for the -rc phase is to check if idx > vdpasim.attr.nvqs? >> It's a more general fix. >> > >Actually, that is already checked by vhost/vdpa.c. > >Taking that into account, is it worth introducing the change for 5.19? >I'm totally ok with the change for 5.20. > >Thanks! > >> For the general case, yes, a callback should be issued to the actual >> simulator so it's not a surprise when VDPASIM_BLK_VQ_NUM increases, >> either dynamically or by anyone testing it. Exactly, since those parameters are not yet configurable at runtime (someday I hope they will be), I often recompile the module by changing them, so for me we should fix them in 5.19. Obviously it's an advanced case, and I expect that if someone recompiles the module changing some hardwired thing, they can expect to have to change something else as well. 
So, I'm also fine with leaving it that way for 5.19, but if you want I can fix it earlier. Thanks, Stefano ^ permalink raw reply [flat|nested] 6+ messages in thread
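The defensive check Eugenio mentions (validating idx against nvqs before classifying it) could look roughly like the sketch below. Again the names are made up for illustration; as noted above, vhost/vdpa.c already rejects out-of-range indices before the op is reached, so this is belt-and-suspenders inside the simulator:

```c
#include <stdint.h>

/* Bounds-checked vq -> group mapping: an out-of-range index falls back
 * to the default group instead of silently claiming group 1 if the
 * number of virtqueues ever changes. */
static uint32_t toy_get_vq_group_checked(uint16_t idx, uint16_t nvqs)
{
	if (idx >= nvqs)
		return 0;
	return idx == 2 ? 1 : 0;
}
```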
end of thread, other threads:[~2022-06-22 15:44 UTC | newest]
Thread overview: 6+ messages
-- links below jump to the message on this page --
[not found] <20220330180436.24644-1-gdawar@xilinx.com>
[not found] ` <20220330180436.24644-16-gdawar@xilinx.com>
2022-04-01 4:24 ` [PATCH v2 15/19] vhost-vdpa: support ASID based IOTLB API Jason Wang
[not found] ` <BY5PR02MB698077E814EC867CEBAE2211B1FD9@BY5PR02MB6980.namprd02.prod.outlook.com>
2022-05-07 10:24 ` Jason Wang
2022-05-09 3:42 ` [PATCH v2 00/19] Control VQ support in vDPA Jason Wang
2022-05-09 7:07 ` Michael S. Tsirkin
[not found] ` <20220330180436.24644-20-gdawar@xilinx.com>
2022-06-21 15:19 ` [PATCH v2 19/19] vdpasim: control virtqueue support Stefano Garzarella
[not found] ` <CAJaqyWd8MR9vTRcCTktzC3VL054x5H5_sXy+MLVNewFDkjQUSw@mail.gmail.com>
[not found] ` <CAJaqyWc36adK-gUzc8tMgDDe5SoBPy7xN-UtcFA4=aDezdJ5LA@mail.gmail.com>
2022-06-22 15:44 ` Stefano Garzarella