* Re: [PATCH v9 07/17] virtio: Don't set FAILED status bit on device index allocation failure [not found] ` <20210713084656.232-8-xieyongji@bytedance.com> @ 2021-07-13 11:02 ` Dan Carpenter 0 siblings, 0 replies; 18+ messages in thread From: Dan Carpenter @ 2021-07-13 11:02 UTC (permalink / raw) To: Xie Yongji Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Tue, Jul 13, 2021 at 04:46:46PM +0800, Xie Yongji wrote: > We don't need to set FAILED status bit on device index allocation > failure since the device initialization hasn't been started yet. The commit message should say what the effect of this change is to the user. Is this a bugfix? Will it have any effect on runtime at all? To me, hearing your thoughts on this is valuable even if you have to guess. "I noticed this mistake during review and I don't think it will affect runtime." regards, dan carpenter _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
[parent not found: <20210713084656.232-14-xieyongji@bytedance.com>]
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() [not found] ` <20210713084656.232-14-xieyongji@bytedance.com> @ 2021-07-13 11:31 ` Dan Carpenter 2021-07-14 2:14 ` Jason Wang 0 siblings, 1 reply; 18+ messages in thread From: Dan Carpenter @ 2021-07-13 11:31 UTC (permalink / raw) To: Xie Yongji Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: > @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) > } > } > > -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, > - struct vhost_iotlb_msg *msg) > +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, > + u64 iova, u64 size, u64 uaddr, u32 perm) > { > struct vhost_dev *dev = &v->vdev; > - struct vhost_iotlb *iotlb = dev->iotlb; > struct page **page_list; > unsigned long list_size = PAGE_SIZE / sizeof(struct page *); > unsigned int gup_flags = FOLL_LONGTERM; > unsigned long npages, cur_base, map_pfn, last_pfn = 0; > unsigned long lock_limit, sz2pin, nchunks, i; > - u64 iova = msg->iova; > + u64 start = iova; > long pinned; > int ret = 0; > > - if (msg->iova < v->range.first || > - msg->iova + msg->size - 1 > v->range.last) > - return -EINVAL; This is not related to your patch, but can the "msg->iova + msg->size" addition can have an integer overflow. From looking at the callers it seems like it can. msg comes from: vhost_chr_write_iter() --> dev->msg_handler(dev, &msg); --> vhost_vdpa_process_iotlb_msg() --> vhost_vdpa_process_iotlb_update() If I'm thinking of the right thing then these are allowed to overflow to 0 because of the " - 1" but not further than that. 
I believe the check needs to be something like: if (msg->iova < v->range.first || msg->iova - 1 > U64_MAX - msg->size || msg->iova + msg->size - 1 > v->range.last) But writing integer overflow checks correctly is notoriously difficult. Do you think you could send a fix for that which is separate from the patchset? We'd want to backport it to stable. regards, dan carpenter _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() 2021-07-13 11:31 ` [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() Dan Carpenter @ 2021-07-14 2:14 ` Jason Wang 2021-07-14 8:05 ` Dan Carpenter 0 siblings, 1 reply; 18+ messages in thread From: Jason Wang @ 2021-07-14 2:14 UTC (permalink / raw) To: Dan Carpenter, Xie Yongji Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila 在 2021/7/13 下午7:31, Dan Carpenter 写道: > On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: >> @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) >> } >> } >> >> -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, >> - struct vhost_iotlb_msg *msg) >> +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, >> + u64 iova, u64 size, u64 uaddr, u32 perm) >> { >> struct vhost_dev *dev = &v->vdev; >> - struct vhost_iotlb *iotlb = dev->iotlb; >> struct page **page_list; >> unsigned long list_size = PAGE_SIZE / sizeof(struct page *); >> unsigned int gup_flags = FOLL_LONGTERM; >> unsigned long npages, cur_base, map_pfn, last_pfn = 0; >> unsigned long lock_limit, sz2pin, nchunks, i; >> - u64 iova = msg->iova; >> + u64 start = iova; >> long pinned; >> int ret = 0; >> >> - if (msg->iova < v->range.first || >> - msg->iova + msg->size - 1 > v->range.last) >> - return -EINVAL; > This is not related to your patch, but can the "msg->iova + msg->size" > addition can have an integer overflow. From looking at the callers it > seems like it can. msg comes from: > vhost_chr_write_iter() > --> dev->msg_handler(dev, &msg); > --> vhost_vdpa_process_iotlb_msg() > --> vhost_vdpa_process_iotlb_update() Yes. > > If I'm thinking of the right thing then these are allowed to overflow to > 0 because of the " - 1" but not further than that. 
I believe the check > needs to be something like: > > if (msg->iova < v->range.first || > msg->iova - 1 > U64_MAX - msg->size || I guess we don't need - 1 here? Thanks > msg->iova + msg->size - 1 > v->range.last) > > But writing integer overflow check correctly is notoriously difficult. > Do you think you could send a fix for that which is separate from the > patcheset? We'd want to backport it to stable. > > regards, > dan carpenter > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() 2021-07-14 2:14 ` Jason Wang @ 2021-07-14 8:05 ` Dan Carpenter 2021-07-14 9:41 ` Jason Wang 0 siblings, 1 reply; 18+ messages in thread From: Dan Carpenter @ 2021-07-14 8:05 UTC (permalink / raw) To: Jason Wang Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Wed, Jul 14, 2021 at 10:14:32AM +0800, Jason Wang wrote: > > 在 2021/7/13 下午7:31, Dan Carpenter 写道: > > On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: > > > @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) > > > } > > > } > > > -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, > > > - struct vhost_iotlb_msg *msg) > > > +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, > > > + u64 iova, u64 size, u64 uaddr, u32 perm) > > > { > > > struct vhost_dev *dev = &v->vdev; > > > - struct vhost_iotlb *iotlb = dev->iotlb; > > > struct page **page_list; > > > unsigned long list_size = PAGE_SIZE / sizeof(struct page *); > > > unsigned int gup_flags = FOLL_LONGTERM; > > > unsigned long npages, cur_base, map_pfn, last_pfn = 0; > > > unsigned long lock_limit, sz2pin, nchunks, i; > > > - u64 iova = msg->iova; > > > + u64 start = iova; > > > long pinned; > > > int ret = 0; > > > - if (msg->iova < v->range.first || > > > - msg->iova + msg->size - 1 > v->range.last) > > > - return -EINVAL; > > This is not related to your patch, but can the "msg->iova + msg->size" > > addition can have an integer overflow. From looking at the callers it > > seems like it can. msg comes from: > > vhost_chr_write_iter() > > --> dev->msg_handler(dev, &msg); > > --> vhost_vdpa_process_iotlb_msg() > > --> vhost_vdpa_process_iotlb_update() > > > Yes. 
> > > > > > If I'm thinking of the right thing then these are allowed to overflow to > > 0 because of the " - 1" but not further than that. I believe the check > > needs to be something like: > > > > if (msg->iova < v->range.first || > > msg->iova - 1 > U64_MAX - msg->size || > > > I guess we don't need - 1 here? The - 1 is important. The highest address is 0xffffffff. So it goes start + size = 0 and then start + size - 1 == 0xffffffff. I guess we could move the - 1 to the other side? msg->iova > U64_MAX - msg->size + 1 || regards, dan carpenter _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() 2021-07-14 8:05 ` Dan Carpenter @ 2021-07-14 9:41 ` Jason Wang 2021-07-14 9:57 ` Dan Carpenter 0 siblings, 1 reply; 18+ messages in thread From: Jason Wang @ 2021-07-14 9:41 UTC (permalink / raw) To: Dan Carpenter Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila 在 2021/7/14 下午4:05, Dan Carpenter 写道: > On Wed, Jul 14, 2021 at 10:14:32AM +0800, Jason Wang wrote: >> 在 2021/7/13 下午7:31, Dan Carpenter 写道: >>> On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: >>>> @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) >>>> } >>>> } >>>> -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, >>>> - struct vhost_iotlb_msg *msg) >>>> +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, >>>> + u64 iova, u64 size, u64 uaddr, u32 perm) >>>> { >>>> struct vhost_dev *dev = &v->vdev; >>>> - struct vhost_iotlb *iotlb = dev->iotlb; >>>> struct page **page_list; >>>> unsigned long list_size = PAGE_SIZE / sizeof(struct page *); >>>> unsigned int gup_flags = FOLL_LONGTERM; >>>> unsigned long npages, cur_base, map_pfn, last_pfn = 0; >>>> unsigned long lock_limit, sz2pin, nchunks, i; >>>> - u64 iova = msg->iova; >>>> + u64 start = iova; >>>> long pinned; >>>> int ret = 0; >>>> - if (msg->iova < v->range.first || >>>> - msg->iova + msg->size - 1 > v->range.last) >>>> - return -EINVAL; >>> This is not related to your patch, but can the "msg->iova + msg->size" >>> addition can have an integer overflow. From looking at the callers it >>> seems like it can. msg comes from: >>> vhost_chr_write_iter() >>> --> dev->msg_handler(dev, &msg); >>> --> vhost_vdpa_process_iotlb_msg() >>> --> vhost_vdpa_process_iotlb_update() >> >> Yes. 
>> >> >>> If I'm thinking of the right thing then these are allowed to overflow to >>> 0 because of the " - 1" but not further than that. I believe the check >>> needs to be something like: >>> >>> if (msg->iova < v->range.first || >>> msg->iova - 1 > U64_MAX - msg->size || >> >> I guess we don't need - 1 here? > The - 1 is important. The highest address is 0xffffffff. So it goes > start + size = 0 and then start + size - 1 == 0xffffffff. Right, so actually msg->iova = 0xfffffffe, msg->size=2 is valid. Thanks > > I guess we could move the - 1 to the other side? > > msg->iova > U64_MAX - msg->size + 1 || > > regards, > dan carpenter > > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() 2021-07-14 9:41 ` Jason Wang @ 2021-07-14 9:57 ` Dan Carpenter 2021-07-15 2:20 ` Jason Wang 0 siblings, 1 reply; 18+ messages in thread From: Dan Carpenter @ 2021-07-14 9:57 UTC (permalink / raw) To: Jason Wang Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Wed, Jul 14, 2021 at 05:41:54PM +0800, Jason Wang wrote: > > 在 2021/7/14 下午4:05, Dan Carpenter 写道: > > On Wed, Jul 14, 2021 at 10:14:32AM +0800, Jason Wang wrote: > > > 在 2021/7/13 下午7:31, Dan Carpenter 写道: > > > > On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: > > > > > @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) > > > > > } > > > > > } > > > > > -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, > > > > > - struct vhost_iotlb_msg *msg) > > > > > +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, > > > > > + u64 iova, u64 size, u64 uaddr, u32 perm) > > > > > { > > > > > struct vhost_dev *dev = &v->vdev; > > > > > - struct vhost_iotlb *iotlb = dev->iotlb; > > > > > struct page **page_list; > > > > > unsigned long list_size = PAGE_SIZE / sizeof(struct page *); > > > > > unsigned int gup_flags = FOLL_LONGTERM; > > > > > unsigned long npages, cur_base, map_pfn, last_pfn = 0; > > > > > unsigned long lock_limit, sz2pin, nchunks, i; > > > > > - u64 iova = msg->iova; > > > > > + u64 start = iova; > > > > > long pinned; > > > > > int ret = 0; > > > > > - if (msg->iova < v->range.first || > > > > > - msg->iova + msg->size - 1 > v->range.last) > > > > > - return -EINVAL; > > > > This is not related to your patch, but can the "msg->iova + msg->size" > > > > addition can have an integer overflow. From looking at the callers it > > > > seems like it can. 
msg comes from: > > > > vhost_chr_write_iter() > > > > --> dev->msg_handler(dev, &msg); > > > > --> vhost_vdpa_process_iotlb_msg() > > > > --> vhost_vdpa_process_iotlb_update() > > > > > > Yes. > > > > > > > > > > If I'm thinking of the right thing then these are allowed to overflow to > > > > 0 because of the " - 1" but not further than that. I believe the check > > > > needs to be something like: > > > > > > > > if (msg->iova < v->range.first || > > > > msg->iova - 1 > U64_MAX - msg->size || > > > > > > I guess we don't need - 1 here? > > The - 1 is important. The highest address is 0xffffffff. So it goes > > start + size = 0 and then start + size - 1 == 0xffffffff. > > > Right, so actually > > msg->iova = 0xfffffffe, msg->size=2 is valid. I believe so, yes. It's inclusive of 0xfffffffe and 0xffffffff. (Not an expert). regards, dan carpenter _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() 2021-07-14 9:57 ` Dan Carpenter @ 2021-07-15 2:20 ` Jason Wang 0 siblings, 0 replies; 18+ messages in thread From: Jason Wang @ 2021-07-15 2:20 UTC (permalink / raw) To: Dan Carpenter Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila 在 2021/7/14 下午5:57, Dan Carpenter 写道: > On Wed, Jul 14, 2021 at 05:41:54PM +0800, Jason Wang wrote: >> 在 2021/7/14 下午4:05, Dan Carpenter 写道: >>> On Wed, Jul 14, 2021 at 10:14:32AM +0800, Jason Wang wrote: >>>> 在 2021/7/13 下午7:31, Dan Carpenter 写道: >>>>> On Tue, Jul 13, 2021 at 04:46:52PM +0800, Xie Yongji wrote: >>>>>> @@ -613,37 +618,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) >>>>>> } >>>>>> } >>>>>> -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, >>>>>> - struct vhost_iotlb_msg *msg) >>>>>> +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, >>>>>> + u64 iova, u64 size, u64 uaddr, u32 perm) >>>>>> { >>>>>> struct vhost_dev *dev = &v->vdev; >>>>>> - struct vhost_iotlb *iotlb = dev->iotlb; >>>>>> struct page **page_list; >>>>>> unsigned long list_size = PAGE_SIZE / sizeof(struct page *); >>>>>> unsigned int gup_flags = FOLL_LONGTERM; >>>>>> unsigned long npages, cur_base, map_pfn, last_pfn = 0; >>>>>> unsigned long lock_limit, sz2pin, nchunks, i; >>>>>> - u64 iova = msg->iova; >>>>>> + u64 start = iova; >>>>>> long pinned; >>>>>> int ret = 0; >>>>>> - if (msg->iova < v->range.first || >>>>>> - msg->iova + msg->size - 1 > v->range.last) >>>>>> - return -EINVAL; >>>>> This is not related to your patch, but can the "msg->iova + msg->size" >>>>> addition can have an integer overflow. From looking at the callers it >>>>> seems like it can. 
msg comes from: >>>>> vhost_chr_write_iter() >>>>> --> dev->msg_handler(dev, &msg); >>>>> --> vhost_vdpa_process_iotlb_msg() >>>>> --> vhost_vdpa_process_iotlb_update() >>>> Yes. >>>> >>>> >>>>> If I'm thinking of the right thing then these are allowed to overflow to >>>>> 0 because of the " - 1" but not further than that. I believe the check >>>>> needs to be something like: >>>>> >>>>> if (msg->iova < v->range.first || >>>>> msg->iova - 1 > U64_MAX - msg->size || >>>> I guess we don't need - 1 here? >>> The - 1 is important. The highest address is 0xffffffff. So it goes >>> start + size = 0 and then start + size - 1 == 0xffffffff. >> >> Right, so actually >> >> msg->iova = 0xfffffffe, msg->size=2 is valid. > I believe so, yes. It's inclusive of 0xfffffffe and 0xffffffff. > (Not an expert). I think so, and we probably need to fix vhost_overflow() as well which did: static bool vhost_overflow(u64 uaddr, u64 size) { /* Make sure 64 bit math will not overflow. */ return uaddr > ULONG_MAX || size > ULONG_MAX || uaddr > ULONG_MAX - size; } Thanks > > regards, > dan carpenter > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
[parent not found: <20210713084656.232-4-xieyongji@bytedance.com>]
* Re: [PATCH v9 03/17] vdpa: Fix code indentation [not found] ` <20210713084656.232-4-xieyongji@bytedance.com> @ 2021-07-14 4:20 ` Joe Perches 0 siblings, 0 replies; 18+ messages in thread From: Joe Perches @ 2021-07-14 4:20 UTC (permalink / raw) To: Xie Yongji, mst, jasowang, stefanha, sgarzare, parav, hch, christian.brauner, rdunlap, willy, viro, axboe, bcrl, corbet, mika.penttila, dan.carpenter, joro, gregkh, zhe.he, xiaodong.liu Cc: kvm, netdev, linux-kernel, virtualization, iommu, songmuchun, linux-fsdevel On Tue, 2021-07-13 at 16:46 +0800, Xie Yongji wrote: > Use tabs to indent the code instead of spaces. There are a lot more of these in this file. $ ./scripts/checkpatch.pl --fix-inplace --strict include/linux/vdpa.h and a little typing gives: --- include/linux/vdpa.h | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h index 3357ac98878d4..14cd4248e59fd 100644 --- a/include/linux/vdpa.h +++ b/include/linux/vdpa.h @@ -43,17 +43,17 @@ struct vdpa_vq_state_split { * @last_used_idx: used index */ struct vdpa_vq_state_packed { - u16 last_avail_counter:1; - u16 last_avail_idx:15; - u16 last_used_counter:1; - u16 last_used_idx:15; + u16 last_avail_counter:1; + u16 last_avail_idx:15; + u16 last_used_counter:1; + u16 last_used_idx:15; }; struct vdpa_vq_state { - union { - struct vdpa_vq_state_split split; - struct vdpa_vq_state_packed packed; - }; + union { + struct vdpa_vq_state_split split; + struct vdpa_vq_state_packed packed; + }; }; struct vdpa_mgmt_dev; @@ -131,7 +131,7 @@ struct vdpa_iova_range { * @vdev: vdpa device * @idx: virtqueue index * @state: pointer to returned state (last_avail_idx) - * @get_vq_notification: Get the notification area for a virtqueue + * @get_vq_notification: Get the notification area for a virtqueue * @vdev: vdpa device * @idx: virtqueue index * Returns the notifcation area @@ -277,13 +277,13 @@ struct vdpa_device 
*__vdpa_alloc_device(struct device *parent, const struct vdpa_config_ops *config, size_t size, const char *name); -#define vdpa_alloc_device(dev_struct, member, parent, config, name) \ - container_of(__vdpa_alloc_device( \ - parent, config, \ - sizeof(dev_struct) + \ - BUILD_BUG_ON_ZERO(offsetof( \ - dev_struct, member)), name), \ - dev_struct, member) +#define vdpa_alloc_device(dev_struct, member, parent, config, name) \ + container_of(__vdpa_alloc_device(parent, config, \ + sizeof(dev_struct) + \ + BUILD_BUG_ON_ZERO(offsetof(dev_struct, \ + member)), \ + name), \ + dev_struct, member) int vdpa_register_device(struct vdpa_device *vdev, int nvqs); void vdpa_unregister_device(struct vdpa_device *vdev); @@ -308,8 +308,8 @@ struct vdpa_driver { int __vdpa_register_driver(struct vdpa_driver *drv, struct module *owner); void vdpa_unregister_driver(struct vdpa_driver *drv); -#define module_vdpa_driver(__vdpa_driver) \ - module_driver(__vdpa_driver, vdpa_register_driver, \ +#define module_vdpa_driver(__vdpa_driver) \ + module_driver(__vdpa_driver, vdpa_register_driver, \ vdpa_unregister_driver) static inline struct vdpa_driver *drv_to_vdpa(struct device_driver *driver) @@ -339,25 +339,25 @@ static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev) static inline void vdpa_reset(struct vdpa_device *vdev) { - const struct vdpa_config_ops *ops = vdev->config; + const struct vdpa_config_ops *ops = vdev->config; vdev->features_valid = false; - ops->set_status(vdev, 0); + ops->set_status(vdev, 0); } static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features) { - const struct vdpa_config_ops *ops = vdev->config; + const struct vdpa_config_ops *ops = vdev->config; vdev->features_valid = true; - return ops->set_features(vdev, features); + return ops->set_features(vdev, features); } - -static inline void vdpa_get_config(struct vdpa_device *vdev, unsigned offset, +static inline void vdpa_get_config(struct vdpa_device *vdev, + unsigned int offset, void 
*buf, unsigned int len) { - const struct vdpa_config_ops *ops = vdev->config; + const struct vdpa_config_ops *ops = vdev->config; /* * Config accesses aren't supposed to trigger before features are set. _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply related [flat|nested] 18+ messages in thread
[parent not found: <20210713084656.232-17-xieyongji@bytedance.com>]
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace [not found] ` <20210713084656.232-17-xieyongji@bytedance.com> @ 2021-07-13 13:27 ` Dan Carpenter 2021-07-14 2:54 ` Jason Wang 2021-07-14 5:45 ` Jason Wang 1 sibling, 1 reply; 18+ messages in thread From: Dan Carpenter @ 2021-07-13 13:27 UTC (permalink / raw) To: Xie Yongji, Jason Wang Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Tue, Jul 13, 2021 at 04:46:55PM +0800, Xie Yongji wrote: > +static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name) > +{ > + struct vduse_vdpa *vdev; > + int ret; > + > + if (dev->vdev) > + return -EEXIST; > + > + vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev, > + &vduse_vdpa_config_ops, name, true); > + if (!vdev) > + return -ENOMEM; This should be an IS_ERR() check instead of a NULL check. The vdpa_alloc_device() macro is doing something very complicated but I'm not sure what. It calls container_of() and that looks buggy until you spot the BUILD_BUG_ON_ZERO() compile time assert which ensures that the container_of() is a no-op. Only one of the callers checks for error pointers correctly so maybe it's too complicated or maybe there should be better documentation. regards, dan carpenter _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace 2021-07-13 13:27 ` [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace Dan Carpenter @ 2021-07-14 2:54 ` Jason Wang 0 siblings, 0 replies; 18+ messages in thread From: Jason Wang @ 2021-07-14 2:54 UTC (permalink / raw) To: Dan Carpenter, Xie Yongji Cc: kvm, mst, virtualization, christian.brauner, corbet, joro, willy, hch, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila 在 2021/7/13 下午9:27, Dan Carpenter 写道: > On Tue, Jul 13, 2021 at 04:46:55PM +0800, Xie Yongji wrote: >> +static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name) >> +{ >> + struct vduse_vdpa *vdev; >> + int ret; >> + >> + if (dev->vdev) >> + return -EEXIST; >> + >> + vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev, >> + &vduse_vdpa_config_ops, name, true); >> + if (!vdev) >> + return -ENOMEM; > This should be an IS_ERR() check instead of a NULL check. Yes. > > The vdpa_alloc_device() macro is doing something very complicated but > I'm not sure what. It calls container_of() and that looks buggy until > you spot the BUILD_BUG_ON_ZERO() compile time assert which ensures that > the container_of() is a no-op. > > Only one of the callers checks for error pointers correctly so maybe > it's too complicated or maybe there should be better documentation. We need better documentation for this macro and fix all the buggy callers. Yong Ji, want to do that? Thanks > > regards, > dan carpenter > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace [not found] ` <20210713084656.232-17-xieyongji@bytedance.com> 2021-07-13 13:27 ` [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace Dan Carpenter @ 2021-07-14 5:45 ` Jason Wang 2021-07-14 5:54 ` Michael S. Tsirkin [not found] ` <CACycT3uh+wUeDM1H7JiCJTMeCVCBngURGKeXD-h+meekNNwiQw@mail.gmail.com> 1 sibling, 2 replies; 18+ messages in thread From: Jason Wang @ 2021-07-14 5:45 UTC (permalink / raw) To: Xie Yongji, mst, stefanha, sgarzare, parav, hch, christian.brauner, rdunlap, willy, viro, axboe, bcrl, corbet, mika.penttila, dan.carpenter, joro, gregkh, zhe.he, xiaodong.liu Cc: kvm, netdev, linux-kernel, virtualization, iommu, songmuchun, linux-fsdevel 在 2021/7/13 下午4:46, Xie Yongji 写道: > This VDUSE driver enables implementing software-emulated vDPA > devices in userspace. The vDPA device is created by > ioctl(VDUSE_CREATE_DEV) on /dev/vduse/control. Then a char device > interface (/dev/vduse/$NAME) is exported to userspace for device > emulation. > > In order to make the device emulation more secure, the device's > control path is handled in kernel. A message mechnism is introduced > to forward some dataplane related control messages to userspace. > > And in the data path, the DMA buffer will be mapped into userspace > address space through different ways depending on the vDPA bus to > which the vDPA device is attached. In virtio-vdpa case, the MMU-based > IOMMU driver is used to achieve that. And in vhost-vdpa case, the > DMA buffer is reside in a userspace memory region which can be shared > to the VDUSE userspace processs via transferring the shmfd. > > For more details on VDUSE design and usage, please see the follow-on > Documentation commit. 
> > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > --- > Documentation/userspace-api/ioctl/ioctl-number.rst | 1 + > drivers/vdpa/Kconfig | 10 + > drivers/vdpa/Makefile | 1 + > drivers/vdpa/vdpa_user/Makefile | 5 + > drivers/vdpa/vdpa_user/vduse_dev.c | 1502 ++++++++++++++++++++ > include/uapi/linux/vduse.h | 221 +++ > 6 files changed, 1740 insertions(+) > create mode 100644 drivers/vdpa/vdpa_user/Makefile > create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c > create mode 100644 include/uapi/linux/vduse.h > > diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst > index 1409e40e6345..293ca3aef358 100644 > --- a/Documentation/userspace-api/ioctl/ioctl-number.rst > +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst > @@ -300,6 +300,7 @@ Code Seq# Include File Comments > 'z' 10-4F drivers/s390/crypto/zcrypt_api.h conflict! > '|' 00-7F linux/media.h > 0x80 00-1F linux/fb.h > +0x81 00-1F linux/vduse.h > 0x89 00-06 arch/x86/include/asm/sockios.h > 0x89 0B-DF linux/sockios.h > 0x89 E0-EF linux/sockios.h SIOCPROTOPRIVATE range > diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig > index a503c1b2bfd9..6e23bce6433a 100644 > --- a/drivers/vdpa/Kconfig > +++ b/drivers/vdpa/Kconfig > @@ -33,6 +33,16 @@ config VDPA_SIM_BLOCK > vDPA block device simulator which terminates IO request in a > memory buffer. > > +config VDPA_USER > + tristate "VDUSE (vDPA Device in Userspace) support" > + depends on EVENTFD && MMU && HAS_DMA > + select DMA_OPS > + select VHOST_IOTLB > + select IOMMU_IOVA > + help > + With VDUSE it is possible to emulate a vDPA Device > + in a userspace program. 
> + > config IFCVF > tristate "Intel IFC VF vDPA driver" > depends on PCI_MSI > diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile > index 67fe7f3d6943..f02ebed33f19 100644 > --- a/drivers/vdpa/Makefile > +++ b/drivers/vdpa/Makefile > @@ -1,6 +1,7 @@ > # SPDX-License-Identifier: GPL-2.0 > obj-$(CONFIG_VDPA) += vdpa.o > obj-$(CONFIG_VDPA_SIM) += vdpa_sim/ > +obj-$(CONFIG_VDPA_USER) += vdpa_user/ > obj-$(CONFIG_IFCVF) += ifcvf/ > obj-$(CONFIG_MLX5_VDPA) += mlx5/ > obj-$(CONFIG_VP_VDPA) += virtio_pci/ > diff --git a/drivers/vdpa/vdpa_user/Makefile b/drivers/vdpa/vdpa_user/Makefile > new file mode 100644 > index 000000000000..260e0b26af99 > --- /dev/null > +++ b/drivers/vdpa/vdpa_user/Makefile > @@ -0,0 +1,5 @@ > +# SPDX-License-Identifier: GPL-2.0 > + > +vduse-y := vduse_dev.o iova_domain.o > + > +obj-$(CONFIG_VDPA_USER) += vduse.o > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c > new file mode 100644 > index 000000000000..c994a4a4660c > --- /dev/null > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c > @@ -0,0 +1,1502 @@ > +// SPDX-License-Identifier: GPL-2.0-only > +/* > + * VDUSE: vDPA Device in Userspace > + * > + * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights reserved. 
> + * > + * Author: Xie Yongji <xieyongji@bytedance.com> > + * > + */ > + > +#include <linux/init.h> > +#include <linux/module.h> > +#include <linux/cdev.h> > +#include <linux/device.h> > +#include <linux/eventfd.h> > +#include <linux/slab.h> > +#include <linux/wait.h> > +#include <linux/dma-map-ops.h> > +#include <linux/poll.h> > +#include <linux/file.h> > +#include <linux/uio.h> > +#include <linux/vdpa.h> > +#include <linux/nospec.h> > +#include <uapi/linux/vduse.h> > +#include <uapi/linux/vdpa.h> > +#include <uapi/linux/virtio_config.h> > +#include <uapi/linux/virtio_ids.h> > +#include <uapi/linux/virtio_blk.h> > +#include <linux/mod_devicetable.h> > + > +#include "iova_domain.h" > + > +#define DRV_AUTHOR "Yongji Xie <xieyongji@bytedance.com>" > +#define DRV_DESC "vDPA Device in Userspace" > +#define DRV_LICENSE "GPL v2" > + > +#define VDUSE_DEV_MAX (1U << MINORBITS) > +#define VDUSE_MAX_BOUNCE_SIZE (64 * 1024 * 1024) > +#define VDUSE_IOVA_SIZE (128 * 1024 * 1024) > +#define VDUSE_REQUEST_TIMEOUT 30 > + > +struct vduse_virtqueue { > + u16 index; > + u16 num_max; > + u32 num; > + u64 desc_addr; > + u64 driver_addr; > + u64 device_addr; > + struct vdpa_vq_state state; > + bool ready; > + bool kicked; > + spinlock_t kick_lock; > + spinlock_t irq_lock; > + struct eventfd_ctx *kickfd; > + struct vdpa_callback cb; > + struct work_struct inject; > +}; > + > +struct vduse_dev; > + > +struct vduse_vdpa { > + struct vdpa_device vdpa; > + struct vduse_dev *dev; > +}; > + > +struct vduse_dev { > + struct vduse_vdpa *vdev; > + struct device *dev; > + struct vduse_virtqueue *vqs; > + struct vduse_iova_domain *domain; > + char *name; > + struct mutex lock; > + spinlock_t msg_lock; > + u64 msg_unique; > + wait_queue_head_t waitq; > + struct list_head send_list; > + struct list_head recv_list; > + struct vdpa_callback config_cb; > + struct work_struct inject; > + spinlock_t irq_lock; > + int minor; > + bool connected; > + u64 api_version; > + u64 device_features; > + u64 
driver_features; > + u32 device_id; > + u32 vendor_id; > + u32 generation; > + u32 config_size; > + void *config; > + u8 status; > + u32 vq_num; > + u32 vq_align; > +}; > + > +struct vduse_dev_msg { > + struct vduse_dev_request req; > + struct vduse_dev_response resp; > + struct list_head list; > + wait_queue_head_t waitq; > + bool completed; > +}; > + > +struct vduse_control { > + u64 api_version; > +}; > + > +static DEFINE_MUTEX(vduse_lock); > +static DEFINE_IDR(vduse_idr); > + > +static dev_t vduse_major; > +static struct class *vduse_class; > +static struct cdev vduse_ctrl_cdev; > +static struct cdev vduse_cdev; > +static struct workqueue_struct *vduse_irq_wq; > + > +static u32 allowed_device_id[] = { > + VIRTIO_ID_BLOCK, > +}; > + > +static inline struct vduse_dev *vdpa_to_vduse(struct vdpa_device *vdpa) > +{ > + struct vduse_vdpa *vdev = container_of(vdpa, struct vduse_vdpa, vdpa); > + > + return vdev->dev; > +} > + > +static inline struct vduse_dev *dev_to_vduse(struct device *dev) > +{ > + struct vdpa_device *vdpa = dev_to_vdpa(dev); > + > + return vdpa_to_vduse(vdpa); > +} > + > +static struct vduse_dev_msg *vduse_find_msg(struct list_head *head, > + uint32_t request_id) > +{ > + struct vduse_dev_msg *msg; > + > + list_for_each_entry(msg, head, list) { > + if (msg->req.request_id == request_id) { > + list_del(&msg->list); > + return msg; > + } > + } > + > + return NULL; > +} > + > +static struct vduse_dev_msg *vduse_dequeue_msg(struct list_head *head) > +{ > + struct vduse_dev_msg *msg = NULL; > + > + if (!list_empty(head)) { > + msg = list_first_entry(head, struct vduse_dev_msg, list); > + list_del(&msg->list); > + } > + > + return msg; > +} > + > +static void vduse_enqueue_msg(struct list_head *head, > + struct vduse_dev_msg *msg) > +{ > + list_add_tail(&msg->list, head); > +} > + > +static int vduse_dev_msg_sync(struct vduse_dev *dev, > + struct vduse_dev_msg *msg) > +{ > + int ret; > + > + init_waitqueue_head(&msg->waitq); > + 
spin_lock(&dev->msg_lock); > + msg->req.request_id = dev->msg_unique++; > + vduse_enqueue_msg(&dev->send_list, msg); > + wake_up(&dev->waitq); > + spin_unlock(&dev->msg_lock); > + > + wait_event_killable_timeout(msg->waitq, msg->completed, > + VDUSE_REQUEST_TIMEOUT * HZ); > + spin_lock(&dev->msg_lock); > + if (!msg->completed) { > + list_del(&msg->list); > + msg->resp.result = VDUSE_REQ_RESULT_FAILED; > + } > + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; I think we should mark the device as malfunctioning when there is a timeout and, for safety, forbid any userspace operations except destroy afterwards. > + spin_unlock(&dev->msg_lock); > + > + return ret; > +} > + > +static int vduse_dev_get_vq_state_packed(struct vduse_dev *dev, > + struct vduse_virtqueue *vq, > + struct vdpa_vq_state_packed *packed) > +{ > + struct vduse_dev_msg msg = { 0 }; > + int ret; > + > + msg.req.type = VDUSE_GET_VQ_STATE; > + msg.req.vq_state.index = vq->index; > + > + ret = vduse_dev_msg_sync(dev, &msg); > + if (ret) > + return ret; > + > + packed->last_avail_counter = > + msg.resp.vq_state.packed.last_avail_counter; > + packed->last_avail_idx = msg.resp.vq_state.packed.last_avail_idx; > + packed->last_used_counter = msg.resp.vq_state.packed.last_used_counter; > + packed->last_used_idx = msg.resp.vq_state.packed.last_used_idx; > + > + return 0; > +} > + > +static int vduse_dev_get_vq_state_split(struct vduse_dev *dev, > + struct vduse_virtqueue *vq, > + struct vdpa_vq_state_split *split) > +{ > + struct vduse_dev_msg msg = { 0 }; > + int ret; > + > + msg.req.type = VDUSE_GET_VQ_STATE; > + msg.req.vq_state.index = vq->index; > + > + ret = vduse_dev_msg_sync(dev, &msg); > + if (ret) > + return ret; > + > + split->avail_index = msg.resp.vq_state.split.avail_index; > + > + return 0; > +} > + > +static int vduse_dev_set_status(struct vduse_dev *dev, u8 status) > +{ > + struct vduse_dev_msg msg = { 0 }; > + > + msg.req.type = VDUSE_SET_STATUS; > + msg.req.s.status = status; >
+ > + return vduse_dev_msg_sync(dev, &msg); > +} > + > +static int vduse_dev_update_iotlb(struct vduse_dev *dev, > + u64 start, u64 last) > +{ > + struct vduse_dev_msg msg = { 0 }; > + > + if (last < start) > + return -EINVAL; > + > + msg.req.type = VDUSE_UPDATE_IOTLB; > + msg.req.iova.start = start; > + msg.req.iova.last = last; > + > + return vduse_dev_msg_sync(dev, &msg); > +} > + > +static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to) > +{ > + struct file *file = iocb->ki_filp; > + struct vduse_dev *dev = file->private_data; > + struct vduse_dev_msg *msg; > + int size = sizeof(struct vduse_dev_request); > + ssize_t ret; > + > + if (iov_iter_count(to) < size) > + return -EINVAL; > + > + spin_lock(&dev->msg_lock); > + while (1) { > + msg = vduse_dequeue_msg(&dev->send_list); > + if (msg) > + break; > + > + ret = -EAGAIN; > + if (file->f_flags & O_NONBLOCK) > + goto unlock; > + > + spin_unlock(&dev->msg_lock); > + ret = wait_event_interruptible_exclusive(dev->waitq, > + !list_empty(&dev->send_list)); > + if (ret) > + return ret; > + > + spin_lock(&dev->msg_lock); > + } > + spin_unlock(&dev->msg_lock); > + ret = copy_to_iter(&msg->req, size, to); > + spin_lock(&dev->msg_lock); > + if (ret != size) { > + ret = -EFAULT; > + vduse_enqueue_msg(&dev->send_list, msg); > + goto unlock; > + } > + vduse_enqueue_msg(&dev->recv_list, msg); > +unlock: > + spin_unlock(&dev->msg_lock); > + > + return ret; > +} > + > +static ssize_t vduse_dev_write_iter(struct kiocb *iocb, struct iov_iter *from) > +{ > + struct file *file = iocb->ki_filp; > + struct vduse_dev *dev = file->private_data; > + struct vduse_dev_response resp; > + struct vduse_dev_msg *msg; > + size_t ret; > + > + ret = copy_from_iter(&resp, sizeof(resp), from); > + if (ret != sizeof(resp)) > + return -EINVAL; > + > + spin_lock(&dev->msg_lock); > + msg = vduse_find_msg(&dev->recv_list, resp.request_id); > + if (!msg) { > + ret = -ENOENT; > + goto unlock; > + } > + > + memcpy(&msg->resp, &resp, 
sizeof(resp)); > + msg->completed = 1; > + wake_up(&msg->waitq); > +unlock: > + spin_unlock(&dev->msg_lock); > + > + return ret; > +} > + > +static __poll_t vduse_dev_poll(struct file *file, poll_table *wait) > +{ > + struct vduse_dev *dev = file->private_data; > + __poll_t mask = 0; > + > + poll_wait(file, &dev->waitq, wait); > + > + if (!list_empty(&dev->send_list)) > + mask |= EPOLLIN | EPOLLRDNORM; > + if (!list_empty(&dev->recv_list)) > + mask |= EPOLLOUT | EPOLLWRNORM; > + > + return mask; > +} > + > +static void vduse_dev_reset(struct vduse_dev *dev) > +{ > + int i; > + struct vduse_iova_domain *domain = dev->domain; > + > + /* The coherent mappings are handled in vduse_dev_free_coherent() */ > + if (domain->bounce_map) > + vduse_domain_reset_bounce_map(domain); > + > + dev->driver_features = 0; > + dev->generation++; > + spin_lock(&dev->irq_lock); > + dev->config_cb.callback = NULL; > + dev->config_cb.private = NULL; > + spin_unlock(&dev->irq_lock); > + flush_work(&dev->inject); > + > + for (i = 0; i < dev->vq_num; i++) { > + struct vduse_virtqueue *vq = &dev->vqs[i]; > + > + vq->ready = false; > + vq->desc_addr = 0; > + vq->driver_addr = 0; > + vq->device_addr = 0; > + vq->num = 0; > + memset(&vq->state, 0, sizeof(vq->state)); > + > + spin_lock(&vq->kick_lock); > + vq->kicked = false; > + if (vq->kickfd) > + eventfd_ctx_put(vq->kickfd); > + vq->kickfd = NULL; > + spin_unlock(&vq->kick_lock); > + > + spin_lock(&vq->irq_lock); > + vq->cb.callback = NULL; > + vq->cb.private = NULL; > + spin_unlock(&vq->irq_lock); > + flush_work(&vq->inject); > + } > +} > + > +static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx, > + u64 desc_area, u64 driver_area, > + u64 device_area) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + vq->desc_addr = desc_area; > + vq->driver_addr = driver_area; > + vq->device_addr = device_area; > + > + return 0; > +} > + > +static void vduse_vdpa_kick_vq(struct 
vdpa_device *vdpa, u16 idx) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + spin_lock(&vq->kick_lock); > + if (!vq->ready) > + goto unlock; > + > + if (vq->kickfd) > + eventfd_signal(vq->kickfd, 1); > + else > + vq->kicked = true; > +unlock: > + spin_unlock(&vq->kick_lock); > +} > + > +static void vduse_vdpa_set_vq_cb(struct vdpa_device *vdpa, u16 idx, > + struct vdpa_callback *cb) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + spin_lock(&vq->irq_lock); > + vq->cb.callback = cb->callback; > + vq->cb.private = cb->private; > + spin_unlock(&vq->irq_lock); > +} > + > +static void vduse_vdpa_set_vq_num(struct vdpa_device *vdpa, u16 idx, u32 num) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + vq->num = num; > +} > + > +static void vduse_vdpa_set_vq_ready(struct vdpa_device *vdpa, > + u16 idx, bool ready) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + vq->ready = ready; > +} > + > +static bool vduse_vdpa_get_vq_ready(struct vdpa_device *vdpa, u16 idx) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + return vq->ready; > +} > + > +static int vduse_vdpa_set_vq_state(struct vdpa_device *vdpa, u16 idx, > + const struct vdpa_vq_state *state) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { > + vq->state.packed.last_avail_counter = > + state->packed.last_avail_counter; > + vq->state.packed.last_avail_idx = state->packed.last_avail_idx; > + vq->state.packed.last_used_counter = > + state->packed.last_used_counter; > + vq->state.packed.last_used_idx = state->packed.last_used_idx; > + } else > + vq->state.split.avail_index = state->split.avail_index; 
> + > + return 0; > +} > + > +static int vduse_vdpa_get_vq_state(struct vdpa_device *vdpa, u16 idx, > + struct vdpa_vq_state *state) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + struct vduse_virtqueue *vq = &dev->vqs[idx]; > + > + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) > + return vduse_dev_get_vq_state_packed(dev, vq, &state->packed); > + > + return vduse_dev_get_vq_state_split(dev, vq, &state->split); > +} > + > +static u32 vduse_vdpa_get_vq_align(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->vq_align; > +} > + > +static u64 vduse_vdpa_get_features(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->device_features; > +} > + > +static int vduse_vdpa_set_features(struct vdpa_device *vdpa, u64 features) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + dev->driver_features = features; > + return 0; > +} > + > +static void vduse_vdpa_set_config_cb(struct vdpa_device *vdpa, > + struct vdpa_callback *cb) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + spin_lock(&dev->irq_lock); > + dev->config_cb.callback = cb->callback; > + dev->config_cb.private = cb->private; > + spin_unlock(&dev->irq_lock); > +} > + > +static u16 vduse_vdpa_get_vq_num_max(struct vdpa_device *vdpa, u16 idx) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->vqs[idx].num_max; > +} > + > +static u32 vduse_vdpa_get_device_id(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->device_id; > +} > + > +static u32 vduse_vdpa_get_vendor_id(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->vendor_id; > +} > + > +static u8 vduse_vdpa_get_status(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->status; > +} > + > +static void vduse_vdpa_set_status(struct vdpa_device *vdpa, u8 status) > 
+{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + if (vduse_dev_set_status(dev, status)) > + return; > + > + dev->status = status; It looks to me that such a design excludes the possibility of letting the userspace device set bits like NEEDS_RESET in the future. I wonder if it's better to do something similar to ccw: 1) require the userspace to update the status bit in the response; 2) update dev->status to the status in the response if there is no timeout. Then userspace could set NEEDS_RESET if necessary. > + if (status == 0) > + vduse_dev_reset(dev); > +} > + > +static size_t vduse_vdpa_get_config_size(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->config_size; > +} > + > +static void vduse_vdpa_get_config(struct vdpa_device *vdpa, unsigned int offset, > + void *buf, unsigned int len) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + if (len > dev->config_size - offset) > + return; > + > + memcpy(buf, dev->config + offset, len); > +} > + > +static void vduse_vdpa_set_config(struct vdpa_device *vdpa, unsigned int offset, > + const void *buf, unsigned int len) > +{ > + /* Now we only support read-only configuration space */ > +} > + > +static u32 vduse_vdpa_get_generation(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + return dev->generation; > +} > + > +static int vduse_vdpa_set_map(struct vdpa_device *vdpa, > + struct vhost_iotlb *iotlb) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + int ret; > + > + ret = vduse_domain_set_map(dev->domain, iotlb); > + if (ret) > + return ret; > + > + ret = vduse_dev_update_iotlb(dev, 0ULL, ULLONG_MAX); > + if (ret) { > + vduse_domain_clear_map(dev->domain, iotlb); > + return ret; > + } > + > + return 0; > +} > + > +static void vduse_vdpa_free(struct vdpa_device *vdpa) > +{ > + struct vduse_dev *dev = vdpa_to_vduse(vdpa); > + > + dev->vdev = NULL; > +} > + > +static const struct vdpa_config_ops
vduse_vdpa_config_ops = { > + .set_vq_address = vduse_vdpa_set_vq_address, > + .kick_vq = vduse_vdpa_kick_vq, > + .set_vq_cb = vduse_vdpa_set_vq_cb, > + .set_vq_num = vduse_vdpa_set_vq_num, > + .set_vq_ready = vduse_vdpa_set_vq_ready, > + .get_vq_ready = vduse_vdpa_get_vq_ready, > + .set_vq_state = vduse_vdpa_set_vq_state, > + .get_vq_state = vduse_vdpa_get_vq_state, > + .get_vq_align = vduse_vdpa_get_vq_align, > + .get_features = vduse_vdpa_get_features, > + .set_features = vduse_vdpa_set_features, > + .set_config_cb = vduse_vdpa_set_config_cb, > + .get_vq_num_max = vduse_vdpa_get_vq_num_max, > + .get_device_id = vduse_vdpa_get_device_id, > + .get_vendor_id = vduse_vdpa_get_vendor_id, > + .get_status = vduse_vdpa_get_status, > + .set_status = vduse_vdpa_set_status, > + .get_config_size = vduse_vdpa_get_config_size, > + .get_config = vduse_vdpa_get_config, > + .set_config = vduse_vdpa_set_config, > + .get_generation = vduse_vdpa_get_generation, > + .set_map = vduse_vdpa_set_map, > + .free = vduse_vdpa_free, > +}; > + > +static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page, > + unsigned long offset, size_t size, > + enum dma_data_direction dir, > + unsigned long attrs) > +{ > + struct vduse_dev *vdev = dev_to_vduse(dev); > + struct vduse_iova_domain *domain = vdev->domain; > + > + return vduse_domain_map_page(domain, page, offset, size, dir, attrs); > +} > + > +static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr, > + size_t size, enum dma_data_direction dir, > + unsigned long attrs) > +{ > + struct vduse_dev *vdev = dev_to_vduse(dev); > + struct vduse_iova_domain *domain = vdev->domain; > + > + return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs); > +} > + > +static void *vduse_dev_alloc_coherent(struct device *dev, size_t size, > + dma_addr_t *dma_addr, gfp_t flag, > + unsigned long attrs) > +{ > + struct vduse_dev *vdev = dev_to_vduse(dev); > + struct vduse_iova_domain *domain = vdev->domain; > + unsigned 
long iova; > + void *addr; > + > + *dma_addr = DMA_MAPPING_ERROR; > + addr = vduse_domain_alloc_coherent(domain, size, > + (dma_addr_t *)&iova, flag, attrs); > + if (!addr) > + return NULL; > + > + *dma_addr = (dma_addr_t)iova; > + > + return addr; > +} > + > +static void vduse_dev_free_coherent(struct device *dev, size_t size, > + void *vaddr, dma_addr_t dma_addr, > + unsigned long attrs) > +{ > + struct vduse_dev *vdev = dev_to_vduse(dev); > + struct vduse_iova_domain *domain = vdev->domain; > + > + vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs); > +} > + > +static size_t vduse_dev_max_mapping_size(struct device *dev) > +{ > + struct vduse_dev *vdev = dev_to_vduse(dev); > + struct vduse_iova_domain *domain = vdev->domain; > + > + return domain->bounce_size; > +} > + > +static const struct dma_map_ops vduse_dev_dma_ops = { > + .map_page = vduse_dev_map_page, > + .unmap_page = vduse_dev_unmap_page, > + .alloc = vduse_dev_alloc_coherent, > + .free = vduse_dev_free_coherent, > + .max_mapping_size = vduse_dev_max_mapping_size, > +}; > + > +static unsigned int perm_to_file_flags(u8 perm) > +{ > + unsigned int flags = 0; > + > + switch (perm) { > + case VDUSE_ACCESS_WO: > + flags |= O_WRONLY; > + break; > + case VDUSE_ACCESS_RO: > + flags |= O_RDONLY; > + break; > + case VDUSE_ACCESS_RW: > + flags |= O_RDWR; > + break; > + default: > + WARN(1, "invalidate vhost IOTLB permission\n"); > + break; > + } > + > + return flags; > +} > + > +static int vduse_kickfd_setup(struct vduse_dev *dev, > + struct vduse_vq_eventfd *eventfd) > +{ > + struct eventfd_ctx *ctx = NULL; > + struct vduse_virtqueue *vq; > + u32 index; > + > + if (eventfd->index >= dev->vq_num) > + return -EINVAL; > + > + index = array_index_nospec(eventfd->index, dev->vq_num); > + vq = &dev->vqs[index]; > + if (eventfd->fd >= 0) { > + ctx = eventfd_ctx_fdget(eventfd->fd); > + if (IS_ERR(ctx)) > + return PTR_ERR(ctx); > + } else if (eventfd->fd != VDUSE_EVENTFD_DEASSIGN) > + return 0; > + > + 
spin_lock(&vq->kick_lock); > + if (vq->kickfd) > + eventfd_ctx_put(vq->kickfd); > + vq->kickfd = ctx; > + if (vq->ready && vq->kicked && vq->kickfd) { > + eventfd_signal(vq->kickfd, 1); > + vq->kicked = false; > + } > + spin_unlock(&vq->kick_lock); > + > + return 0; > +} > + > +static bool vduse_dev_is_ready(struct vduse_dev *dev) > +{ > + int i; > + > + for (i = 0; i < dev->vq_num; i++) > + if (!dev->vqs[i].num_max) > + return false; > + > + return true; > +} > + > +static void vduse_dev_irq_inject(struct work_struct *work) > +{ > + struct vduse_dev *dev = container_of(work, struct vduse_dev, inject); > + > + spin_lock_irq(&dev->irq_lock); > + if (dev->config_cb.callback) > + dev->config_cb.callback(dev->config_cb.private); > + spin_unlock_irq(&dev->irq_lock); > +} > + > +static void vduse_vq_irq_inject(struct work_struct *work) > +{ > + struct vduse_virtqueue *vq = container_of(work, > + struct vduse_virtqueue, inject); > + > + spin_lock_irq(&vq->irq_lock); > + if (vq->ready && vq->cb.callback) > + vq->cb.callback(vq->cb.private); > + spin_unlock_irq(&vq->irq_lock); > +} > + > +static long vduse_dev_ioctl(struct file *file, unsigned int cmd, > + unsigned long arg) > +{ > + struct vduse_dev *dev = file->private_data; > + void __user *argp = (void __user *)arg; > + int ret; > + > + switch (cmd) { > + case VDUSE_IOTLB_GET_FD: { > + struct vduse_iotlb_entry entry; > + struct vhost_iotlb_map *map; > + struct vdpa_map_file *map_file; > + struct vduse_iova_domain *domain = dev->domain; > + struct file *f = NULL; > + > + ret = -EFAULT; > + if (copy_from_user(&entry, argp, sizeof(entry))) > + break; > + > + ret = -EINVAL; > + if (entry.start > entry.last) > + break; > + > + spin_lock(&domain->iotlb_lock); > + map = vhost_iotlb_itree_first(domain->iotlb, > + entry.start, entry.last); > + if (map) { > + map_file = (struct vdpa_map_file *)map->opaque; > + f = get_file(map_file->file); > + entry.offset = map_file->offset; > + entry.start = map->start; > + entry.last = 
map->last; > + entry.perm = map->perm; > + } > + spin_unlock(&domain->iotlb_lock); > + ret = -EINVAL; > + if (!f) > + break; > + > + ret = -EFAULT; > + if (copy_to_user(argp, &entry, sizeof(entry))) { > + fput(f); > + break; > + } > + ret = receive_fd(f, perm_to_file_flags(entry.perm)); > + fput(f); > + break; > + } > + case VDUSE_DEV_GET_FEATURES: Let's add a comment to explain here. E.g. we just mirror what the driver wrote, and the driver is expected to check FEATURES_OK. > + ret = put_user(dev->driver_features, (u64 __user *)argp); > + break; > + case VDUSE_DEV_SET_CONFIG: { > + struct vduse_config_data config; > + unsigned long size = offsetof(struct vduse_config_data, > + buffer); > + > + ret = -EFAULT; > + if (copy_from_user(&config, argp, size)) > + break; > + > + ret = -EINVAL; > + if (config.length == 0 || > + config.length > dev->config_size - config.offset) > + break; > + > + ret = -EFAULT; > + if (copy_from_user(dev->config + config.offset, argp + size, > + config.length)) > + break; > + > + ret = 0; > + break; > + } > + case VDUSE_DEV_INJECT_IRQ: It's better to have "config" in the name.
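E.g. something like the following in the uapi header; the ioctl number and direction here are only placeholders, not taken from the actual header:

```c
/* Hypothetical rename sketch -- the nr 0x13 is a placeholder: */
#define VDUSE_DEV_INJECT_CONFIG_IRQ	_IO(VDUSE_BASE, 0x13)
```

That way it is obvious this injects the config interrupt rather than a vq interrupt.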
> + ret = 0; > + queue_work(vduse_irq_wq, &dev->inject); > + break; > + case VDUSE_VQ_SETUP: { > + struct vduse_vq_config config; > + u32 index; > + > + ret = -EFAULT; > + if (copy_from_user(&config, argp, sizeof(config))) > + break; > + > + ret = -EINVAL; > + if (config.index >= dev->vq_num) > + break; > + > + index = array_index_nospec(config.index, dev->vq_num); > + dev->vqs[index].num_max = config.max_size; > + ret = 0; > + break; > + } > + case VDUSE_VQ_GET_INFO: { > + struct vduse_vq_info vq_info; > + struct vduse_virtqueue *vq; > + u32 index; > + > + ret = -EFAULT; > + if (copy_from_user(&vq_info, argp, sizeof(vq_info))) > + break; > + > + ret = -EINVAL; > + if (vq_info.index >= dev->vq_num) > + break; > + > + index = array_index_nospec(vq_info.index, dev->vq_num); > + vq = &dev->vqs[index]; > + vq_info.desc_addr = vq->desc_addr; > + vq_info.driver_addr = vq->driver_addr; > + vq_info.device_addr = vq->device_addr; > + vq_info.num = vq->num; > + > + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { > + vq_info.packed.last_avail_counter = > + vq->state.packed.last_avail_counter; > + vq_info.packed.last_avail_idx = > + vq->state.packed.last_avail_idx; > + vq_info.packed.last_used_counter = > + vq->state.packed.last_used_counter; > + vq_info.packed.last_used_idx = > + vq->state.packed.last_used_idx; > + } else > + vq_info.split.avail_index = > + vq->state.split.avail_index; > + > + vq_info.ready = vq->ready; > + > + ret = -EFAULT; > + if (copy_to_user(argp, &vq_info, sizeof(vq_info))) > + break; > + > + ret = 0; > + break; > + } > + case VDUSE_VQ_SETUP_KICKFD: { > + struct vduse_vq_eventfd eventfd; > + > + ret = -EFAULT; > + if (copy_from_user(&eventfd, argp, sizeof(eventfd))) > + break; > + > + ret = vduse_kickfd_setup(dev, &eventfd); > + break; > + } > + case VDUSE_VQ_INJECT_IRQ: { > + u32 index; > + > + ret = -EFAULT; > + if (get_user(index, (u32 __user *)argp)) > + break; > + > + ret = -EINVAL; > + if (index >= dev->vq_num) > + break; > + > + ret = 
0; > + index = array_index_nospec(index, dev->vq_num); > + queue_work(vduse_irq_wq, &dev->vqs[index].inject); > + break; > + } > + default: > + ret = -ENOIOCTLCMD; > + break; > + } > + > + return ret; > +} > + > +static int vduse_dev_release(struct inode *inode, struct file *file) > +{ > + struct vduse_dev *dev = file->private_data; > + > + spin_lock(&dev->msg_lock); > + /* Make sure the inflight messages can processed after reconncection */ > + list_splice_init(&dev->recv_list, &dev->send_list); > + spin_unlock(&dev->msg_lock); > + dev->connected = false; > + > + return 0; > +} > + > +static struct vduse_dev *vduse_dev_get_from_minor(int minor) > +{ > + struct vduse_dev *dev; > + > + mutex_lock(&vduse_lock); > + dev = idr_find(&vduse_idr, minor); > + mutex_unlock(&vduse_lock); > + > + return dev; > +} > + > +static int vduse_dev_open(struct inode *inode, struct file *file) > +{ > + int ret; > + struct vduse_dev *dev = vduse_dev_get_from_minor(iminor(inode)); > + > + if (!dev) > + return -ENODEV; > + > + ret = -EBUSY; > + mutex_lock(&dev->lock); > + if (dev->connected) > + goto unlock; > + > + ret = 0; > + dev->connected = true; > + file->private_data = dev; > +unlock: > + mutex_unlock(&dev->lock); > + > + return ret; > +} > + > +static const struct file_operations vduse_dev_fops = { > + .owner = THIS_MODULE, > + .open = vduse_dev_open, > + .release = vduse_dev_release, > + .read_iter = vduse_dev_read_iter, > + .write_iter = vduse_dev_write_iter, > + .poll = vduse_dev_poll, > + .unlocked_ioctl = vduse_dev_ioctl, > + .compat_ioctl = compat_ptr_ioctl, > + .llseek = noop_llseek, > +}; > + > +static struct vduse_dev *vduse_dev_create(void) > +{ > + struct vduse_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL); > + > + if (!dev) > + return NULL; > + > + mutex_init(&dev->lock); > + spin_lock_init(&dev->msg_lock); > + INIT_LIST_HEAD(&dev->send_list); > + INIT_LIST_HEAD(&dev->recv_list); > + spin_lock_init(&dev->irq_lock); > + > + INIT_WORK(&dev->inject, vduse_dev_irq_inject); 
> + init_waitqueue_head(&dev->waitq); > + > + return dev; > +} > + > +static void vduse_dev_destroy(struct vduse_dev *dev) > +{ > + kfree(dev); > +} > + > +static struct vduse_dev *vduse_find_dev(const char *name) > +{ > + struct vduse_dev *dev; > + int id; > + > + idr_for_each_entry(&vduse_idr, dev, id) > + if (!strcmp(dev->name, name)) > + return dev; > + > + return NULL; > +} > + > +static int vduse_destroy_dev(char *name) > +{ > + struct vduse_dev *dev = vduse_find_dev(name); > + > + if (!dev) > + return -EINVAL; > + > + mutex_lock(&dev->lock); > + if (dev->vdev || dev->connected) { > + mutex_unlock(&dev->lock); > + return -EBUSY; > + } > + dev->connected = true; > + mutex_unlock(&dev->lock); > + > + vduse_dev_reset(dev); > + device_destroy(vduse_class, MKDEV(MAJOR(vduse_major), dev->minor)); > + idr_remove(&vduse_idr, dev->minor); > + kvfree(dev->config); > + kfree(dev->vqs); > + vduse_domain_destroy(dev->domain); > + kfree(dev->name); > + vduse_dev_destroy(dev); > + module_put(THIS_MODULE); > + > + return 0; > +} > + > +static bool device_is_allowed(u32 device_id) > +{ > + int i; > + > + for (i = 0; i < ARRAY_SIZE(allowed_device_id); i++) > + if (allowed_device_id[i] == device_id) > + return true; > + > + return false; > +} > + > +static bool features_is_valid(u64 features) > +{ > + if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM))) > + return false; > + > + /* Now we only support read-only configuration space */ > + if (features & (1ULL << VIRTIO_BLK_F_CONFIG_WCE)) > + return false; > + > + return true; > +} > + > +static bool vduse_validate_config(struct vduse_dev_config *config) > +{ > + if (config->bounce_size > VDUSE_MAX_BOUNCE_SIZE) > + return false; > + > + if (config->vq_align > PAGE_SIZE) > + return false; > + > + if (config->config_size > PAGE_SIZE) > + return false; Any reason for this check? 
> + > + if (!device_is_allowed(config->device_id)) > + return false; > + > + if (!features_is_valid(config->features)) > + return false; > + > + return true; > +} > + > +static int vduse_create_dev(struct vduse_dev_config *config, > + void *config_buf, u64 api_version) > +{ > + int i, ret; > + struct vduse_dev *dev; > + > + ret = -EEXIST; > + if (vduse_find_dev(config->name)) > + goto err; > + > + ret = -ENOMEM; > + dev = vduse_dev_create(); > + if (!dev) > + goto err; > + > + dev->api_version = api_version; > + dev->device_features = config->features; > + dev->device_id = config->device_id; > + dev->vendor_id = config->vendor_id; > + dev->name = kstrdup(config->name, GFP_KERNEL); > + if (!dev->name) > + goto err_str; > + > + dev->domain = vduse_domain_create(VDUSE_IOVA_SIZE - 1, > + config->bounce_size); > + if (!dev->domain) > + goto err_domain; > + > + dev->config = config_buf; > + dev->config_size = config->config_size; > + dev->vq_align = config->vq_align; > + dev->vq_num = config->vq_num; > + dev->vqs = kcalloc(dev->vq_num, sizeof(*dev->vqs), GFP_KERNEL); > + if (!dev->vqs) > + goto err_vqs; > + > + for (i = 0; i < dev->vq_num; i++) { > + dev->vqs[i].index = i; > + INIT_WORK(&dev->vqs[i].inject, vduse_vq_irq_inject); > + spin_lock_init(&dev->vqs[i].kick_lock); > + spin_lock_init(&dev->vqs[i].irq_lock); > + } > + > + ret = idr_alloc(&vduse_idr, dev, 1, VDUSE_DEV_MAX, GFP_KERNEL); > + if (ret < 0) > + goto err_idr; > + > + dev->minor = ret; > + dev->dev = device_create(vduse_class, NULL, > + MKDEV(MAJOR(vduse_major), dev->minor), > + NULL, "%s", config->name); > + if (IS_ERR(dev->dev)) { > + ret = PTR_ERR(dev->dev); > + goto err_dev; > + } > + __module_get(THIS_MODULE); > + > + return 0; > +err_dev: > + idr_remove(&vduse_idr, dev->minor); > +err_idr: > + kfree(dev->vqs); > +err_vqs: > + vduse_domain_destroy(dev->domain); > +err_domain: > + kfree(dev->name); > +err_str: > + vduse_dev_destroy(dev); > +err: > + kvfree(config_buf); > + return ret; > +} > + > 
+static long vduse_ioctl(struct file *file, unsigned int cmd, > + unsigned long arg) > +{ > + int ret; > + void __user *argp = (void __user *)arg; > + struct vduse_control *control = file->private_data; > + > + mutex_lock(&vduse_lock); > + switch (cmd) { > + case VDUSE_GET_API_VERSION: > + ret = put_user(control->api_version, (u64 __user *)argp); > + break; > + case VDUSE_SET_API_VERSION: { > + u64 api_version; > + > + ret = -EFAULT; > + if (get_user(api_version, (u64 __user *)argp)) > + break; > + > + ret = -EINVAL; > + if (api_version > VDUSE_API_VERSION) > + break; > + > + ret = 0; > + control->api_version = api_version; > + break; > + } > + case VDUSE_CREATE_DEV: { > + struct vduse_dev_config config; > + unsigned long size = offsetof(struct vduse_dev_config, config); > + void *buf; > + > + ret = -EFAULT; > + if (copy_from_user(&config, argp, size)) > + break; > + > + ret = -EINVAL; > + if (vduse_validate_config(&config) == false) > + break; > + > + buf = vmemdup_user(argp + size, config.config_size); > + if (IS_ERR(buf)) { > + ret = PTR_ERR(buf); > + break; > + } > + config.name[VDUSE_NAME_MAX - 1] = '\0'; > + ret = vduse_create_dev(&config, buf, control->api_version); > + break; > + } > + case VDUSE_DESTROY_DEV: { > + char name[VDUSE_NAME_MAX]; > + > + ret = -EFAULT; > + if (copy_from_user(name, argp, VDUSE_NAME_MAX)) > + break; > + > + name[VDUSE_NAME_MAX - 1] = '\0'; > + ret = vduse_destroy_dev(name); > + break; > + } > + default: > + ret = -EINVAL; > + break; > + } > + mutex_unlock(&vduse_lock); > + > + return ret; > +} > + > +static int vduse_release(struct inode *inode, struct file *file) > +{ > + struct vduse_control *control = file->private_data; > + > + kfree(control); > + return 0; > +} > + > +static int vduse_open(struct inode *inode, struct file *file) > +{ > + struct vduse_control *control; > + > + control = kmalloc(sizeof(struct vduse_control), GFP_KERNEL); > + if (!control) > + return -ENOMEM; > + > + control->api_version = VDUSE_API_VERSION; > + 
file->private_data = control; > + > + return 0; > +} > + > +static const struct file_operations vduse_ctrl_fops = { > + .owner = THIS_MODULE, > + .open = vduse_open, > + .release = vduse_release, > + .unlocked_ioctl = vduse_ioctl, > + .compat_ioctl = compat_ptr_ioctl, > + .llseek = noop_llseek, > +}; > + > +static char *vduse_devnode(struct device *dev, umode_t *mode) > +{ > + return kasprintf(GFP_KERNEL, "vduse/%s", dev_name(dev)); > +} > + > +static void vduse_mgmtdev_release(struct device *dev) > +{ > +} > + > +static struct device vduse_mgmtdev = { > + .init_name = "vduse", > + .release = vduse_mgmtdev_release, > +}; > + > +static struct vdpa_mgmt_dev mgmt_dev; > + > +static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name) > +{ > + struct vduse_vdpa *vdev; > + int ret; > + > + if (dev->vdev) > + return -EEXIST; > + > + vdev = vdpa_alloc_device(struct vduse_vdpa, vdpa, dev->dev, > + &vduse_vdpa_config_ops, name, true); > + if (!vdev) > + return -ENOMEM; > + > + dev->vdev = vdev; > + vdev->dev = dev; > + vdev->vdpa.dev.dma_mask = &vdev->vdpa.dev.coherent_dma_mask; > + ret = dma_set_mask_and_coherent(&vdev->vdpa.dev, DMA_BIT_MASK(64)); > + if (ret) { > + put_device(&vdev->vdpa.dev); > + return ret; > + } > + set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops); > + vdev->vdpa.dma_dev = &vdev->vdpa.dev; > + vdev->vdpa.mdev = &mgmt_dev; > + > + return 0; > +} > + > +static int vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name) > +{ > + struct vduse_dev *dev; > + int ret; > + > + mutex_lock(&vduse_lock); > + dev = vduse_find_dev(name); > + if (!dev || !vduse_dev_is_ready(dev)) { > + mutex_unlock(&vduse_lock); > + return -EINVAL; > + } > + ret = vduse_dev_init_vdpa(dev, name); > + mutex_unlock(&vduse_lock); > + if (ret) > + return ret; > + > + ret = _vdpa_register_device(&dev->vdev->vdpa, dev->vq_num); > + if (ret) { > + put_device(&dev->vdev->vdpa.dev); > + return ret; > + } > + > + return 0; > +} > + > +static void vdpa_dev_del(struct 
vdpa_mgmt_dev *mdev, struct vdpa_device *dev) > +{ > + _vdpa_unregister_device(dev); > +} > + > +static const struct vdpa_mgmtdev_ops vdpa_dev_mgmtdev_ops = { > + .dev_add = vdpa_dev_add, > + .dev_del = vdpa_dev_del, > +}; > + > +static struct virtio_device_id id_table[] = { > + { VIRTIO_ID_BLOCK, VIRTIO_DEV_ANY_ID }, > + { 0 }, > +}; > + > +static struct vdpa_mgmt_dev mgmt_dev = { > + .device = &vduse_mgmtdev, > + .id_table = id_table, > + .ops = &vdpa_dev_mgmtdev_ops, > +}; > + > +static int vduse_mgmtdev_init(void) > +{ > + int ret; > + > + ret = device_register(&vduse_mgmtdev); > + if (ret) > + return ret; > + > + ret = vdpa_mgmtdev_register(&mgmt_dev); > + if (ret) > + goto err; > + > + return 0; > +err: > + device_unregister(&vduse_mgmtdev); > + return ret; > +} > + > +static void vduse_mgmtdev_exit(void) > +{ > + vdpa_mgmtdev_unregister(&mgmt_dev); > + device_unregister(&vduse_mgmtdev); > +} > + > +static int vduse_init(void) > +{ > + int ret; > + struct device *dev; > + > + vduse_class = class_create(THIS_MODULE, "vduse"); > + if (IS_ERR(vduse_class)) > + return PTR_ERR(vduse_class); > + > + vduse_class->devnode = vduse_devnode; > + > + ret = alloc_chrdev_region(&vduse_major, 0, VDUSE_DEV_MAX, "vduse"); > + if (ret) > + goto err_chardev_region; > + > + /* /dev/vduse/control */ > + cdev_init(&vduse_ctrl_cdev, &vduse_ctrl_fops); > + vduse_ctrl_cdev.owner = THIS_MODULE; > + ret = cdev_add(&vduse_ctrl_cdev, vduse_major, 1); > + if (ret) > + goto err_ctrl_cdev; > + > + dev = device_create(vduse_class, NULL, vduse_major, NULL, "control"); > + if (IS_ERR(dev)) { > + ret = PTR_ERR(dev); > + goto err_device; > + } > + > + /* /dev/vduse/$DEVICE */ > + cdev_init(&vduse_cdev, &vduse_dev_fops); > + vduse_cdev.owner = THIS_MODULE; > + ret = cdev_add(&vduse_cdev, MKDEV(MAJOR(vduse_major), 1), > + VDUSE_DEV_MAX - 1); > + if (ret) > + goto err_cdev; > + > + vduse_irq_wq = alloc_workqueue("vduse-irq", > + WQ_HIGHPRI | WQ_SYSFS | WQ_UNBOUND, 0); > + if (!vduse_irq_wq) > + 
goto err_wq; > + > + ret = vduse_domain_init(); > + if (ret) > + goto err_domain; > + > + ret = vduse_mgmtdev_init(); > + if (ret) > + goto err_mgmtdev; > + > + return 0; > +err_mgmtdev: > + vduse_domain_exit(); > +err_domain: > + destroy_workqueue(vduse_irq_wq); > +err_wq: > + cdev_del(&vduse_cdev); > +err_cdev: > + device_destroy(vduse_class, vduse_major); > +err_device: > + cdev_del(&vduse_ctrl_cdev); > +err_ctrl_cdev: > + unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX); > +err_chardev_region: > + class_destroy(vduse_class); > + return ret; > +} > +module_init(vduse_init); > + > +static void vduse_exit(void) > +{ > + vduse_mgmtdev_exit(); > + vduse_domain_exit(); > + destroy_workqueue(vduse_irq_wq); > + cdev_del(&vduse_cdev); > + device_destroy(vduse_class, vduse_major); > + cdev_del(&vduse_ctrl_cdev); > + unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX); > + class_destroy(vduse_class); > +} > +module_exit(vduse_exit); > + > +MODULE_LICENSE(DRV_LICENSE); > +MODULE_AUTHOR(DRV_AUTHOR); > +MODULE_DESCRIPTION(DRV_DESC); > diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h > new file mode 100644 > index 000000000000..585fcce398af > --- /dev/null > +++ b/include/uapi/linux/vduse.h > @@ -0,0 +1,221 @@ > +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ > +#ifndef _UAPI_VDUSE_H_ > +#define _UAPI_VDUSE_H_ > + > +#include <linux/types.h> > + > +#define VDUSE_BASE 0x81 > + > +/* The ioctls for control device (/dev/vduse/control) */ > + > +#define VDUSE_API_VERSION 0 > + > +/* > + * Get the version of VDUSE API that kernel supported (VDUSE_API_VERSION). > + * This is used for future extension. > + */ > +#define VDUSE_GET_API_VERSION _IOR(VDUSE_BASE, 0x00, __u64) > + > +/* Set the version of VDUSE API that userspace supported. */ > +#define VDUSE_SET_API_VERSION _IOW(VDUSE_BASE, 0x01, __u64) > + > +/* > + * The basic configuration of a VDUSE device, which is used by > + * VDUSE_CREATE_DEV ioctl to create a VDUSE device. 
> + */ > +struct vduse_dev_config { > +#define VDUSE_NAME_MAX 256 > + char name[VDUSE_NAME_MAX]; /* vduse device name, needs to be NUL terminated */ > + __u32 vendor_id; /* virtio vendor id */ > + __u32 device_id; /* virtio device id */ > + __u64 features; /* virtio features */ > + __u64 bounce_size; /* the size of bounce buffer for data transfer */ > + __u32 vq_num; /* the number of virtqueues */ > + __u32 vq_align; /* the allocation alignment of virtqueue's metadata */ > + __u32 reserved[15]; /* for future use */ > + __u32 config_size; /* the size of the configuration space */ > + __u8 config[0]; /* the buffer of the configuration space */ > +}; > + > +/* Create a VDUSE device which is represented by a char device (/dev/vduse/$NAME) */ > +#define VDUSE_CREATE_DEV _IOW(VDUSE_BASE, 0x02, struct vduse_dev_config) > + > +/* > + * Destroy a VDUSE device. Make sure there are no more references > + * to the char device (/dev/vduse/$NAME). > + */ > +#define VDUSE_DESTROY_DEV _IOW(VDUSE_BASE, 0x03, char[VDUSE_NAME_MAX]) > + > +/* The ioctls for VDUSE device (/dev/vduse/$NAME) */ > + > +/* > + * The information of one IOVA region, which is retrieved from > + * VDUSE_IOTLB_GET_FD ioctl. > + */ > +struct vduse_iotlb_entry { > + __u64 offset; /* the mmap offset on returned file descriptor */ > + __u64 start; /* start of the IOVA range: [start, last] */ > + __u64 last; /* last of the IOVA range: [start, last] */ > +#define VDUSE_ACCESS_RO 0x1 > +#define VDUSE_ACCESS_WO 0x2 > +#define VDUSE_ACCESS_RW 0x3 > + __u8 perm; /* access permission of this region */ > +}; > + > +/* > + * Find the first IOVA region that overlaps with the range [start, last] > + * and return the corresponding file descriptor. Return -EINVAL means the > + * IOVA region doesn't exist. Caller should set start and last fields. > + */ > +#define VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x10, struct vduse_iotlb_entry) > + > +/* > + * Get the negotiated virtio features. 
It's a subset of the features in > + * struct vduse_dev_config which can be accepted by virtio driver. It's > + * only valid after FEATURES_OK status bit is set. > + */ > +#define VDUSE_DEV_GET_FEATURES _IOR(VDUSE_BASE, 0x11, __u64) > + > +/* > + * The information that is used by VDUSE_DEV_SET_CONFIG ioctl to update > + * device configuration space. > + */ > +struct vduse_config_data { > + __u32 offset; /* offset from the beginning of configuration space */ > + __u32 length; /* the length to write to configuration space */ > + __u8 buffer[0]; /* buffer used to write from */ > +}; > + > +/* Set device configuration space */ > +#define VDUSE_DEV_SET_CONFIG _IOW(VDUSE_BASE, 0x12, struct vduse_config_data) > + > +/* > + * Inject a config interrupt. It's usually used to notify virtio driver > + * that device configuration space has changed. > + */ > +#define VDUSE_DEV_INJECT_IRQ _IO(VDUSE_BASE, 0x13) > + > +/* > + * The basic configuration of a virtqueue, which is used by > + * VDUSE_VQ_SETUP ioctl to setup a virtqueue. > + */ > +struct vduse_vq_config { > + __u32 index; /* virtqueue index */ > + __u16 max_size; /* the max size of virtqueue */ > +}; > + > +/* > + * Setup the specified virtqueue. Make sure all virtqueues have been > + * configured before the device is attached to vDPA bus. > + */ > +#define VDUSE_VQ_SETUP _IOW(VDUSE_BASE, 0x14, struct vduse_vq_config) > + > +struct vduse_vq_state_split { > + __u16 avail_index; /* available index */ > +}; > + > +struct vduse_vq_state_packed { > + __u16 last_avail_counter:1; /* last driver ring wrap counter observed by device */ > + __u16 last_avail_idx:15; /* device available index */ > + __u16 last_used_counter:1; /* device ring wrap counter */ > + __u16 last_used_idx:15; /* used index */ > +}; > + > +/* > + * The information of a virtqueue, which is retrieved from > + * VDUSE_VQ_GET_INFO ioctl. 
> + */ > +struct vduse_vq_info { > + __u32 index; /* virtqueue index */ > + __u32 num; /* the size of virtqueue */ > + __u64 desc_addr; /* address of desc area */ > + __u64 driver_addr; /* address of driver area */ > + __u64 device_addr; /* address of device area */ > + union { > + struct vduse_vq_state_split split; /* split virtqueue state */ > + struct vduse_vq_state_packed packed; /* packed virtqueue state */ > + }; > + __u8 ready; /* ready status of virtqueue */ > +}; > + > +/* Get the specified virtqueue's information. Caller should set index field. */ > +#define VDUSE_VQ_GET_INFO _IOWR(VDUSE_BASE, 0x15, struct vduse_vq_info) > + > +/* > + * The eventfd configuration for the specified virtqueue. It's used by > + * VDUSE_VQ_SETUP_KICKFD ioctl to setup kick eventfd. > + */ > +struct vduse_vq_eventfd { > + __u32 index; /* virtqueue index */ > +#define VDUSE_EVENTFD_DEASSIGN -1 > + int fd; /* eventfd, -1 means de-assigning the eventfd */ > +}; > + > +/* > + * Setup kick eventfd for specified virtqueue. The kick eventfd is used > + * by VDUSE kernel module to notify userspace to consume the avail vring. > + */ > +#define VDUSE_VQ_SETUP_KICKFD _IOW(VDUSE_BASE, 0x16, struct vduse_vq_eventfd) > + > +/* > + * Inject an interrupt for specific virtqueue. It's used to notify virtio driver > + * to consume the used vring. 
> + */ > +#define VDUSE_VQ_INJECT_IRQ _IOW(VDUSE_BASE, 0x17, __u32) > + > +/* The control messages definition for read/write on /dev/vduse/$NAME */ > + > +enum vduse_req_type { > + /* Get the state for specified virtqueue from userspace */ > + VDUSE_GET_VQ_STATE, > + /* Set the device status */ > + VDUSE_SET_STATUS, > + /* > + * Notify userspace to update the memory mapping for specified > + * IOVA range via VDUSE_IOTLB_GET_FD ioctl > + */ > + VDUSE_UPDATE_IOTLB, > +}; > + > +struct vduse_vq_state { > + __u32 index; /* virtqueue index */ > + union { > + struct vduse_vq_state_split split; /* split virtqueue state */ > + struct vduse_vq_state_packed packed; /* packed virtqueue state */ > + }; > +}; > + > +struct vduse_dev_status { > + __u8 status; /* device status */ > +}; > + > +struct vduse_iova_range { > + __u64 start; /* start of the IOVA range: [start, end] */ > + __u64 last; /* last of the IOVA range: [start, end] */ > +}; > + > +struct vduse_dev_request { > + __u32 type; /* request type */ > + __u32 request_id; /* request id */ > + __u32 reserved[2]; /* for future use */ > + union { > + struct vduse_vq_state vq_state; /* virtqueue state, only use index */ > + struct vduse_dev_status s; /* device status */ > + struct vduse_iova_range iova; /* IOVA range for updating */ > + __u32 padding[16]; /* padding */ > + }; > +}; > + > +struct vduse_dev_response { > + __u32 request_id; /* corresponding request id */ > +#define VDUSE_REQ_RESULT_OK 0x00 > +#define VDUSE_REQ_RESULT_FAILED 0x01 > + __u32 result; /* the result of request */ > + __u32 reserved[2]; /* for future use */ > + union { > + struct vduse_vq_state vq_state; /* virtqueue state */ > + __u32 padding[16]; /* padding */ > + }; > +}; > + > +#endif /* _UAPI_VDUSE_H_ */ _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace 2021-07-14 5:45 ` Jason Wang @ 2021-07-14 5:54 ` Michael S. Tsirkin 2021-07-14 6:02 ` Jason Wang [not found] ` <CACycT3uh+wUeDM1H7JiCJTMeCVCBngURGKeXD-h+meekNNwiQw@mail.gmail.com> 1 sibling, 1 reply; 18+ messages in thread From: Michael S. Tsirkin @ 2021-07-14 5:54 UTC (permalink / raw) To: Jason Wang Cc: kvm, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, dan.carpenter, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila On Wed, Jul 14, 2021 at 01:45:39PM +0800, Jason Wang wrote: > > +static int vduse_dev_msg_sync(struct vduse_dev *dev, > > + struct vduse_dev_msg *msg) > > +{ > > + int ret; > > + > > + init_waitqueue_head(&msg->waitq); > > + spin_lock(&dev->msg_lock); > > + msg->req.request_id = dev->msg_unique++; > > + vduse_enqueue_msg(&dev->send_list, msg); > > + wake_up(&dev->waitq); > > + spin_unlock(&dev->msg_lock); > > + > > + wait_event_killable_timeout(msg->waitq, msg->completed, > > + VDUSE_REQUEST_TIMEOUT * HZ); > > + spin_lock(&dev->msg_lock); > > + if (!msg->completed) { > > + list_del(&msg->list); > > + msg->resp.result = VDUSE_REQ_RESULT_FAILED; > > + } > > + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; > > > I think we should mark the device as malfunction when there is a timeout and > forbid any userspace operations except for the destroy aftwards for safety. This looks like if one tried to run gdb on the program the behaviour will change completely because kernel wants it to respond within specific time. Looks like a receipe for heisenbugs. Let's not build interfaces with arbitrary timeouts like that. Interruptible wait exists for this very reason. Let userspace have its own code to set and use timers. This does mean that userspace will likely have to change a bit to support this driver, such is life. 
-- MST
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace 2021-07-14 5:54 ` Michael S. Tsirkin @ 2021-07-14 6:02 ` Jason Wang 2021-07-14 6:47 ` Greg KH 0 siblings, 1 reply; 18+ messages in thread From: Jason Wang @ 2021-07-14 6:02 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, dan.carpenter, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, gregkh, rdunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, mika.penttila 在 2021/7/14 下午1:54, Michael S. Tsirkin 写道: > On Wed, Jul 14, 2021 at 01:45:39PM +0800, Jason Wang wrote: >>> +static int vduse_dev_msg_sync(struct vduse_dev *dev, >>> + struct vduse_dev_msg *msg) >>> +{ >>> + int ret; >>> + >>> + init_waitqueue_head(&msg->waitq); >>> + spin_lock(&dev->msg_lock); >>> + msg->req.request_id = dev->msg_unique++; >>> + vduse_enqueue_msg(&dev->send_list, msg); >>> + wake_up(&dev->waitq); >>> + spin_unlock(&dev->msg_lock); >>> + >>> + wait_event_killable_timeout(msg->waitq, msg->completed, >>> + VDUSE_REQUEST_TIMEOUT * HZ); >>> + spin_lock(&dev->msg_lock); >>> + if (!msg->completed) { >>> + list_del(&msg->list); >>> + msg->resp.result = VDUSE_REQ_RESULT_FAILED; >>> + } >>> + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; >> >> I think we should mark the device as malfunction when there is a timeout and >> forbid any userspace operations except for the destroy aftwards for safety. > This looks like if one tried to run gdb on the program the behaviour > will change completely because kernel wants it to respond within > specific time. Looks like a receipe for heisenbugs. > > Let's not build interfaces with arbitrary timeouts like that. > Interruptible wait exists for this very reason. The problem is. Do we want userspace program like modprobe to be stuck for indefinite time and expect the administrator to kill that? Thanks > Let userspace have its > own code to set and use timers. 
This does mean that userspace will > likely have to change a bit to support this driver, such is life. > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace 2021-07-14 6:02 ` Jason Wang @ 2021-07-14 6:47 ` Greg KH 2021-07-14 8:56 ` Jason Wang 0 siblings, 1 reply; 18+ messages in thread From: Greg KH @ 2021-07-14 6:47 UTC (permalink / raw) To: Jason Wang Cc: kvm, Michael S. Tsirkin, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, dan.carpenter, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, netdev, rdunlap, linux-kernel, iommu, bcrl, linux-fsdevel, mika.penttila On Wed, Jul 14, 2021 at 02:02:50PM +0800, Jason Wang wrote: > > 在 2021/7/14 下午1:54, Michael S. Tsirkin 写道: > > On Wed, Jul 14, 2021 at 01:45:39PM +0800, Jason Wang wrote: > > > > +static int vduse_dev_msg_sync(struct vduse_dev *dev, > > > > + struct vduse_dev_msg *msg) > > > > +{ > > > > + int ret; > > > > + > > > > + init_waitqueue_head(&msg->waitq); > > > > + spin_lock(&dev->msg_lock); > > > > + msg->req.request_id = dev->msg_unique++; > > > > + vduse_enqueue_msg(&dev->send_list, msg); > > > > + wake_up(&dev->waitq); > > > > + spin_unlock(&dev->msg_lock); > > > > + > > > > + wait_event_killable_timeout(msg->waitq, msg->completed, > > > > + VDUSE_REQUEST_TIMEOUT * HZ); > > > > + spin_lock(&dev->msg_lock); > > > > + if (!msg->completed) { > > > > + list_del(&msg->list); > > > > + msg->resp.result = VDUSE_REQ_RESULT_FAILED; > > > > + } > > > > + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; > > > > > > I think we should mark the device as malfunction when there is a timeout and > > > forbid any userspace operations except for the destroy aftwards for safety. > > This looks like if one tried to run gdb on the program the behaviour > > will change completely because kernel wants it to respond within > > specific time. Looks like a receipe for heisenbugs. > > > > Let's not build interfaces with arbitrary timeouts like that. > > Interruptible wait exists for this very reason. > > > The problem is. 
Do we want userspace program like modprobe to be stuck for > indefinite time and expect the administrator to kill that? Why would modprobe be stuck for forever? Is this on the module probe path? _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace 2021-07-14 6:47 ` Greg KH @ 2021-07-14 8:56 ` Jason Wang 0 siblings, 0 replies; 18+ messages in thread From: Jason Wang @ 2021-07-14 8:56 UTC (permalink / raw) To: Greg KH Cc: kvm, Michael S. Tsirkin, virtualization, christian.brauner, corbet, joro, willy, hch, Xie Yongji, dan.carpenter, xiaodong.liu, viro, stefanha, songmuchun, axboe, zhe.he, netdev, rdunlap, linux-kernel, iommu, bcrl, linux-fsdevel, mika.penttila 在 2021/7/14 下午2:47, Greg KH 写道: > On Wed, Jul 14, 2021 at 02:02:50PM +0800, Jason Wang wrote: >> 在 2021/7/14 下午1:54, Michael S. Tsirkin 写道: >>> On Wed, Jul 14, 2021 at 01:45:39PM +0800, Jason Wang wrote: >>>>> +static int vduse_dev_msg_sync(struct vduse_dev *dev, >>>>> + struct vduse_dev_msg *msg) >>>>> +{ >>>>> + int ret; >>>>> + >>>>> + init_waitqueue_head(&msg->waitq); >>>>> + spin_lock(&dev->msg_lock); >>>>> + msg->req.request_id = dev->msg_unique++; >>>>> + vduse_enqueue_msg(&dev->send_list, msg); >>>>> + wake_up(&dev->waitq); >>>>> + spin_unlock(&dev->msg_lock); >>>>> + >>>>> + wait_event_killable_timeout(msg->waitq, msg->completed, >>>>> + VDUSE_REQUEST_TIMEOUT * HZ); >>>>> + spin_lock(&dev->msg_lock); >>>>> + if (!msg->completed) { >>>>> + list_del(&msg->list); >>>>> + msg->resp.result = VDUSE_REQ_RESULT_FAILED; >>>>> + } >>>>> + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; >>>> I think we should mark the device as malfunction when there is a timeout and >>>> forbid any userspace operations except for the destroy aftwards for safety. >>> This looks like if one tried to run gdb on the program the behaviour >>> will change completely because kernel wants it to respond within >>> specific time. Looks like a receipe for heisenbugs. >>> >>> Let's not build interfaces with arbitrary timeouts like that. >>> Interruptible wait exists for this very reason. >> >> The problem is. 
Do we want userspace program like modprobe to be stuck for >> indefinite time and expect the administrator to kill that? > Why would modprobe be stuck for forever? > > Is this on the module probe path? Yes, it is called in the device probing path where the kernel forwards the device configuration request to userspace and wait for its response. If it turns out to be tricky, we can implement the whole device inside the kernel and leave only the datapath in the userspace (as what TUN did). Thanks > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace [not found] ` <CACycT3uh+wUeDM1H7JiCJTMeCVCBngURGKeXD-h+meekNNwiQw@mail.gmail.com> @ 2021-07-14 9:12 ` Jason Wang [not found] ` <CACycT3vTyR=+6xOJyXCu_bGAKcz4Fx3bA25WfdBjhxJ6MOvLzw@mail.gmail.com> 0 siblings, 1 reply; 18+ messages in thread From: Jason Wang @ 2021-07-14 9:12 UTC (permalink / raw) To: Yongji Xie Cc: kvm, Michael S. Tsirkin, virtualization, Christian Brauner, Jonathan Corbet, joro, Matthew Wilcox, Christoph Hellwig, Dan Carpenter, Liu Xiaodong, Al Viro, Stefan Hajnoczi, songmuchun, Jens Axboe, He Zhe, Greg KH, Randy Dunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, Mika Penttilä 在 2021/7/14 下午2:49, Yongji Xie 写道: > On Wed, Jul 14, 2021 at 1:45 PM Jason Wang <jasowang@redhat.com> wrote: >> >> 在 2021/7/13 下午4:46, Xie Yongji 写道: >>> This VDUSE driver enables implementing software-emulated vDPA >>> devices in userspace. The vDPA device is created by >>> ioctl(VDUSE_CREATE_DEV) on /dev/vduse/control. Then a char device >>> interface (/dev/vduse/$NAME) is exported to userspace for device >>> emulation. >>> >>> In order to make the device emulation more secure, the device's >>> control path is handled in kernel. A message mechnism is introduced >>> to forward some dataplane related control messages to userspace. >>> >>> And in the data path, the DMA buffer will be mapped into userspace >>> address space through different ways depending on the vDPA bus to >>> which the vDPA device is attached. In virtio-vdpa case, the MMU-based >>> IOMMU driver is used to achieve that. And in vhost-vdpa case, the >>> DMA buffer is reside in a userspace memory region which can be shared >>> to the VDUSE userspace processs via transferring the shmfd. >>> >>> For more details on VDUSE design and usage, please see the follow-on >>> Documentation commit. 
>>> >>> Signed-off-by: Xie Yongji <xieyongji@bytedance.com> >>> --- >>> Documentation/userspace-api/ioctl/ioctl-number.rst | 1 + >>> drivers/vdpa/Kconfig | 10 + >>> drivers/vdpa/Makefile | 1 + >>> drivers/vdpa/vdpa_user/Makefile | 5 + >>> drivers/vdpa/vdpa_user/vduse_dev.c | 1502 ++++++++++++++++++++ >>> include/uapi/linux/vduse.h | 221 +++ >>> 6 files changed, 1740 insertions(+) >>> create mode 100644 drivers/vdpa/vdpa_user/Makefile >>> create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c >>> create mode 100644 include/uapi/linux/vduse.h >>> >>> diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst >>> index 1409e40e6345..293ca3aef358 100644 >>> --- a/Documentation/userspace-api/ioctl/ioctl-number.rst >>> +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst >>> @@ -300,6 +300,7 @@ Code Seq# Include File Comments >>> 'z' 10-4F drivers/s390/crypto/zcrypt_api.h conflict! >>> '|' 00-7F linux/media.h >>> 0x80 00-1F linux/fb.h >>> +0x81 00-1F linux/vduse.h >>> 0x89 00-06 arch/x86/include/asm/sockios.h >>> 0x89 0B-DF linux/sockios.h >>> 0x89 E0-EF linux/sockios.h SIOCPROTOPRIVATE range >>> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig >>> index a503c1b2bfd9..6e23bce6433a 100644 >>> --- a/drivers/vdpa/Kconfig >>> +++ b/drivers/vdpa/Kconfig >>> @@ -33,6 +33,16 @@ config VDPA_SIM_BLOCK >>> vDPA block device simulator which terminates IO request in a >>> memory buffer. >>> >>> +config VDPA_USER >>> + tristate "VDUSE (vDPA Device in Userspace) support" >>> + depends on EVENTFD && MMU && HAS_DMA >>> + select DMA_OPS >>> + select VHOST_IOTLB >>> + select IOMMU_IOVA >>> + help >>> + With VDUSE it is possible to emulate a vDPA Device >>> + in a userspace program. 
>>> + >>> config IFCVF >>> tristate "Intel IFC VF vDPA driver" >>> depends on PCI_MSI >>> diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile >>> index 67fe7f3d6943..f02ebed33f19 100644 >>> --- a/drivers/vdpa/Makefile >>> +++ b/drivers/vdpa/Makefile >>> @@ -1,6 +1,7 @@ >>> # SPDX-License-Identifier: GPL-2.0 >>> obj-$(CONFIG_VDPA) += vdpa.o >>> obj-$(CONFIG_VDPA_SIM) += vdpa_sim/ >>> +obj-$(CONFIG_VDPA_USER) += vdpa_user/ >>> obj-$(CONFIG_IFCVF) += ifcvf/ >>> obj-$(CONFIG_MLX5_VDPA) += mlx5/ >>> obj-$(CONFIG_VP_VDPA) += virtio_pci/ >>> diff --git a/drivers/vdpa/vdpa_user/Makefile b/drivers/vdpa/vdpa_user/Makefile >>> new file mode 100644 >>> index 000000000000..260e0b26af99 >>> --- /dev/null >>> +++ b/drivers/vdpa/vdpa_user/Makefile >>> @@ -0,0 +1,5 @@ >>> +# SPDX-License-Identifier: GPL-2.0 >>> + >>> +vduse-y := vduse_dev.o iova_domain.o >>> + >>> +obj-$(CONFIG_VDPA_USER) += vduse.o >>> diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c >>> new file mode 100644 >>> index 000000000000..c994a4a4660c >>> --- /dev/null >>> +++ b/drivers/vdpa/vdpa_user/vduse_dev.c >>> @@ -0,0 +1,1502 @@ >>> +// SPDX-License-Identifier: GPL-2.0-only >>> +/* >>> + * VDUSE: vDPA Device in Userspace >>> + * >>> + * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights reserved. 
>>> + * >>> + * Author: Xie Yongji <xieyongji@bytedance.com> >>> + * >>> + */ >>> + >>> +#include <linux/init.h> >>> +#include <linux/module.h> >>> +#include <linux/cdev.h> >>> +#include <linux/device.h> >>> +#include <linux/eventfd.h> >>> +#include <linux/slab.h> >>> +#include <linux/wait.h> >>> +#include <linux/dma-map-ops.h> >>> +#include <linux/poll.h> >>> +#include <linux/file.h> >>> +#include <linux/uio.h> >>> +#include <linux/vdpa.h> >>> +#include <linux/nospec.h> >>> +#include <uapi/linux/vduse.h> >>> +#include <uapi/linux/vdpa.h> >>> +#include <uapi/linux/virtio_config.h> >>> +#include <uapi/linux/virtio_ids.h> >>> +#include <uapi/linux/virtio_blk.h> >>> +#include <linux/mod_devicetable.h> >>> + >>> +#include "iova_domain.h" >>> + >>> +#define DRV_AUTHOR "Yongji Xie <xieyongji@bytedance.com>" >>> +#define DRV_DESC "vDPA Device in Userspace" >>> +#define DRV_LICENSE "GPL v2" >>> + >>> +#define VDUSE_DEV_MAX (1U << MINORBITS) >>> +#define VDUSE_MAX_BOUNCE_SIZE (64 * 1024 * 1024) >>> +#define VDUSE_IOVA_SIZE (128 * 1024 * 1024) >>> +#define VDUSE_REQUEST_TIMEOUT 30 >>> + >>> +struct vduse_virtqueue { >>> + u16 index; >>> + u16 num_max; >>> + u32 num; >>> + u64 desc_addr; >>> + u64 driver_addr; >>> + u64 device_addr; >>> + struct vdpa_vq_state state; >>> + bool ready; >>> + bool kicked; >>> + spinlock_t kick_lock; >>> + spinlock_t irq_lock; >>> + struct eventfd_ctx *kickfd; >>> + struct vdpa_callback cb; >>> + struct work_struct inject; >>> +}; >>> + >>> +struct vduse_dev; >>> + >>> +struct vduse_vdpa { >>> + struct vdpa_device vdpa; >>> + struct vduse_dev *dev; >>> +}; >>> + >>> +struct vduse_dev { >>> + struct vduse_vdpa *vdev; >>> + struct device *dev; >>> + struct vduse_virtqueue *vqs; >>> + struct vduse_iova_domain *domain; >>> + char *name; >>> + struct mutex lock; >>> + spinlock_t msg_lock; >>> + u64 msg_unique; >>> + wait_queue_head_t waitq; >>> + struct list_head send_list; >>> + struct list_head recv_list; >>> + struct vdpa_callback config_cb; >>> + 
struct work_struct inject; >>> + spinlock_t irq_lock; >>> + int minor; >>> + bool connected; >>> + u64 api_version; >>> + u64 device_features; >>> + u64 driver_features; >>> + u32 device_id; >>> + u32 vendor_id; >>> + u32 generation; >>> + u32 config_size; >>> + void *config; >>> + u8 status; >>> + u32 vq_num; >>> + u32 vq_align; >>> +}; >>> + >>> +struct vduse_dev_msg { >>> + struct vduse_dev_request req; >>> + struct vduse_dev_response resp; >>> + struct list_head list; >>> + wait_queue_head_t waitq; >>> + bool completed; >>> +}; >>> + >>> +struct vduse_control { >>> + u64 api_version; >>> +}; >>> + >>> +static DEFINE_MUTEX(vduse_lock); >>> +static DEFINE_IDR(vduse_idr); >>> + >>> +static dev_t vduse_major; >>> +static struct class *vduse_class; >>> +static struct cdev vduse_ctrl_cdev; >>> +static struct cdev vduse_cdev; >>> +static struct workqueue_struct *vduse_irq_wq; >>> + >>> +static u32 allowed_device_id[] = { >>> + VIRTIO_ID_BLOCK, >>> +}; >>> + >>> +static inline struct vduse_dev *vdpa_to_vduse(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_vdpa *vdev = container_of(vdpa, struct vduse_vdpa, vdpa); >>> + >>> + return vdev->dev; >>> +} >>> + >>> +static inline struct vduse_dev *dev_to_vduse(struct device *dev) >>> +{ >>> + struct vdpa_device *vdpa = dev_to_vdpa(dev); >>> + >>> + return vdpa_to_vduse(vdpa); >>> +} >>> + >>> +static struct vduse_dev_msg *vduse_find_msg(struct list_head *head, >>> + uint32_t request_id) >>> +{ >>> + struct vduse_dev_msg *msg; >>> + >>> + list_for_each_entry(msg, head, list) { >>> + if (msg->req.request_id == request_id) { >>> + list_del(&msg->list); >>> + return msg; >>> + } >>> + } >>> + >>> + return NULL; >>> +} >>> + >>> +static struct vduse_dev_msg *vduse_dequeue_msg(struct list_head *head) >>> +{ >>> + struct vduse_dev_msg *msg = NULL; >>> + >>> + if (!list_empty(head)) { >>> + msg = list_first_entry(head, struct vduse_dev_msg, list); >>> + list_del(&msg->list); >>> + } >>> + >>> + return msg; >>> +} >>> + >>> 
+static void vduse_enqueue_msg(struct list_head *head, >>> + struct vduse_dev_msg *msg) >>> +{ >>> + list_add_tail(&msg->list, head); >>> +} >>> + >>> +static int vduse_dev_msg_sync(struct vduse_dev *dev, >>> + struct vduse_dev_msg *msg) >>> +{ >>> + int ret; >>> + >>> + init_waitqueue_head(&msg->waitq); >>> + spin_lock(&dev->msg_lock); >>> + msg->req.request_id = dev->msg_unique++; >>> + vduse_enqueue_msg(&dev->send_list, msg); >>> + wake_up(&dev->waitq); >>> + spin_unlock(&dev->msg_lock); >>> + >>> + wait_event_killable_timeout(msg->waitq, msg->completed, >>> + VDUSE_REQUEST_TIMEOUT * HZ); >>> + spin_lock(&dev->msg_lock); >>> + if (!msg->completed) { >>> + list_del(&msg->list); >>> + msg->resp.result = VDUSE_REQ_RESULT_FAILED; >>> + } >>> + ret = (msg->resp.result == VDUSE_REQ_RESULT_OK) ? 0 : -EIO; >> >> I think we should mark the device as malfunction when there is a timeout >> and forbid any userspace operations except for the destroy aftwards for >> safety. >> > Is it necessary? Actually we can't stop the dataplane processing in > userspace. That's fine since for virtio-vdpa, we use bounce buffer, for vhost, it works like the device can be reset. > It doesn’t seem to help much if we only forbid ioctl and > read/write. And there might be some messages that can tolerate > failure. I think at least we should figure out fatal failures and forbid any future userspace operations. > Even userspace can do something like NEEDS_RESET to do > recovery. Yes, but that's not someting we can depend. 
> >>> + spin_unlock(&dev->msg_lock); >>> + >>> + return ret; >>> +} >>> + >>> +static int vduse_dev_get_vq_state_packed(struct vduse_dev *dev, >>> + struct vduse_virtqueue *vq, >>> + struct vdpa_vq_state_packed *packed) >>> +{ >>> + struct vduse_dev_msg msg = { 0 }; >>> + int ret; >>> + >>> + msg.req.type = VDUSE_GET_VQ_STATE; >>> + msg.req.vq_state.index = vq->index; >>> + >>> + ret = vduse_dev_msg_sync(dev, &msg); >>> + if (ret) >>> + return ret; >>> + >>> + packed->last_avail_counter = >>> + msg.resp.vq_state.packed.last_avail_counter; >>> + packed->last_avail_idx = msg.resp.vq_state.packed.last_avail_idx; >>> + packed->last_used_counter = msg.resp.vq_state.packed.last_used_counter; >>> + packed->last_used_idx = msg.resp.vq_state.packed.last_used_idx; >>> + >>> + return 0; >>> +} >>> + >>> +static int vduse_dev_get_vq_state_split(struct vduse_dev *dev, >>> + struct vduse_virtqueue *vq, >>> + struct vdpa_vq_state_split *split) >>> +{ >>> + struct vduse_dev_msg msg = { 0 }; >>> + int ret; >>> + >>> + msg.req.type = VDUSE_GET_VQ_STATE; >>> + msg.req.vq_state.index = vq->index; >>> + >>> + ret = vduse_dev_msg_sync(dev, &msg); >>> + if (ret) >>> + return ret; >>> + >>> + split->avail_index = msg.resp.vq_state.split.avail_index; >>> + >>> + return 0; >>> +} >>> + >>> +static int vduse_dev_set_status(struct vduse_dev *dev, u8 status) >>> +{ >>> + struct vduse_dev_msg msg = { 0 }; >>> + >>> + msg.req.type = VDUSE_SET_STATUS; >>> + msg.req.s.status = status; >>> + >>> + return vduse_dev_msg_sync(dev, &msg); >>> +} >>> + >>> +static int vduse_dev_update_iotlb(struct vduse_dev *dev, >>> + u64 start, u64 last) >>> +{ >>> + struct vduse_dev_msg msg = { 0 }; >>> + >>> + if (last < start) >>> + return -EINVAL; >>> + >>> + msg.req.type = VDUSE_UPDATE_IOTLB; >>> + msg.req.iova.start = start; >>> + msg.req.iova.last = last; >>> + >>> + return vduse_dev_msg_sync(dev, &msg); >>> +} >>> + >>> +static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to) >>> +{ >>> + 
struct file *file = iocb->ki_filp; >>> + struct vduse_dev *dev = file->private_data; >>> + struct vduse_dev_msg *msg; >>> + int size = sizeof(struct vduse_dev_request); >>> + ssize_t ret; >>> + >>> + if (iov_iter_count(to) < size) >>> + return -EINVAL; >>> + >>> + spin_lock(&dev->msg_lock); >>> + while (1) { >>> + msg = vduse_dequeue_msg(&dev->send_list); >>> + if (msg) >>> + break; >>> + >>> + ret = -EAGAIN; >>> + if (file->f_flags & O_NONBLOCK) >>> + goto unlock; >>> + >>> + spin_unlock(&dev->msg_lock); >>> + ret = wait_event_interruptible_exclusive(dev->waitq, >>> + !list_empty(&dev->send_list)); >>> + if (ret) >>> + return ret; >>> + >>> + spin_lock(&dev->msg_lock); >>> + } >>> + spin_unlock(&dev->msg_lock); >>> + ret = copy_to_iter(&msg->req, size, to); >>> + spin_lock(&dev->msg_lock); >>> + if (ret != size) { >>> + ret = -EFAULT; >>> + vduse_enqueue_msg(&dev->send_list, msg); >>> + goto unlock; >>> + } >>> + vduse_enqueue_msg(&dev->recv_list, msg); >>> +unlock: >>> + spin_unlock(&dev->msg_lock); >>> + >>> + return ret; >>> +} >>> + >>> +static ssize_t vduse_dev_write_iter(struct kiocb *iocb, struct iov_iter *from) >>> +{ >>> + struct file *file = iocb->ki_filp; >>> + struct vduse_dev *dev = file->private_data; >>> + struct vduse_dev_response resp; >>> + struct vduse_dev_msg *msg; >>> + size_t ret; >>> + >>> + ret = copy_from_iter(&resp, sizeof(resp), from); >>> + if (ret != sizeof(resp)) >>> + return -EINVAL; >>> + >>> + spin_lock(&dev->msg_lock); >>> + msg = vduse_find_msg(&dev->recv_list, resp.request_id); >>> + if (!msg) { >>> + ret = -ENOENT; >>> + goto unlock; >>> + } >>> + >>> + memcpy(&msg->resp, &resp, sizeof(resp)); >>> + msg->completed = 1; >>> + wake_up(&msg->waitq); >>> +unlock: >>> + spin_unlock(&dev->msg_lock); >>> + >>> + return ret; >>> +} >>> + >>> +static __poll_t vduse_dev_poll(struct file *file, poll_table *wait) >>> +{ >>> + struct vduse_dev *dev = file->private_data; >>> + __poll_t mask = 0; >>> + >>> + poll_wait(file, &dev->waitq, 
wait); >>> + >>> + if (!list_empty(&dev->send_list)) >>> + mask |= EPOLLIN | EPOLLRDNORM; >>> + if (!list_empty(&dev->recv_list)) >>> + mask |= EPOLLOUT | EPOLLWRNORM; >>> + >>> + return mask; >>> +} >>> + >>> +static void vduse_dev_reset(struct vduse_dev *dev) >>> +{ >>> + int i; >>> + struct vduse_iova_domain *domain = dev->domain; >>> + >>> + /* The coherent mappings are handled in vduse_dev_free_coherent() */ >>> + if (domain->bounce_map) >>> + vduse_domain_reset_bounce_map(domain); >>> + >>> + dev->driver_features = 0; >>> + dev->generation++; >>> + spin_lock(&dev->irq_lock); >>> + dev->config_cb.callback = NULL; >>> + dev->config_cb.private = NULL; >>> + spin_unlock(&dev->irq_lock); >>> + flush_work(&dev->inject); >>> + >>> + for (i = 0; i < dev->vq_num; i++) { >>> + struct vduse_virtqueue *vq = &dev->vqs[i]; >>> + >>> + vq->ready = false; >>> + vq->desc_addr = 0; >>> + vq->driver_addr = 0; >>> + vq->device_addr = 0; >>> + vq->num = 0; >>> + memset(&vq->state, 0, sizeof(vq->state)); >>> + >>> + spin_lock(&vq->kick_lock); >>> + vq->kicked = false; >>> + if (vq->kickfd) >>> + eventfd_ctx_put(vq->kickfd); >>> + vq->kickfd = NULL; >>> + spin_unlock(&vq->kick_lock); >>> + >>> + spin_lock(&vq->irq_lock); >>> + vq->cb.callback = NULL; >>> + vq->cb.private = NULL; >>> + spin_unlock(&vq->irq_lock); >>> + flush_work(&vq->inject); >>> + } >>> +} >>> + >>> +static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx, >>> + u64 desc_area, u64 driver_area, >>> + u64 device_area) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + vq->desc_addr = desc_area; >>> + vq->driver_addr = driver_area; >>> + vq->device_addr = device_area; >>> + >>> + return 0; >>> +} >>> + >>> +static void vduse_vdpa_kick_vq(struct vdpa_device *vdpa, u16 idx) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + spin_lock(&vq->kick_lock); >>> + if 
(!vq->ready) >>> + goto unlock; >>> + >>> + if (vq->kickfd) >>> + eventfd_signal(vq->kickfd, 1); >>> + else >>> + vq->kicked = true; >>> +unlock: >>> + spin_unlock(&vq->kick_lock); >>> +} >>> + >>> +static void vduse_vdpa_set_vq_cb(struct vdpa_device *vdpa, u16 idx, >>> + struct vdpa_callback *cb) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + spin_lock(&vq->irq_lock); >>> + vq->cb.callback = cb->callback; >>> + vq->cb.private = cb->private; >>> + spin_unlock(&vq->irq_lock); >>> +} >>> + >>> +static void vduse_vdpa_set_vq_num(struct vdpa_device *vdpa, u16 idx, u32 num) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + vq->num = num; >>> +} >>> + >>> +static void vduse_vdpa_set_vq_ready(struct vdpa_device *vdpa, >>> + u16 idx, bool ready) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + vq->ready = ready; >>> +} >>> + >>> +static bool vduse_vdpa_get_vq_ready(struct vdpa_device *vdpa, u16 idx) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + return vq->ready; >>> +} >>> + >>> +static int vduse_vdpa_set_vq_state(struct vdpa_device *vdpa, u16 idx, >>> + const struct vdpa_vq_state *state) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { >>> + vq->state.packed.last_avail_counter = >>> + state->packed.last_avail_counter; >>> + vq->state.packed.last_avail_idx = state->packed.last_avail_idx; >>> + vq->state.packed.last_used_counter = >>> + state->packed.last_used_counter; >>> + vq->state.packed.last_used_idx = state->packed.last_used_idx; >>> + } else >>> + vq->state.split.avail_index = state->split.avail_index; >>> + >>> + return 0; >>> +} >>> + >>> +static 
int vduse_vdpa_get_vq_state(struct vdpa_device *vdpa, u16 idx, >>> + struct vdpa_vq_state *state) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + struct vduse_virtqueue *vq = &dev->vqs[idx]; >>> + >>> + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) >>> + return vduse_dev_get_vq_state_packed(dev, vq, &state->packed); >>> + >>> + return vduse_dev_get_vq_state_split(dev, vq, &state->split); >>> +} >>> + >>> +static u32 vduse_vdpa_get_vq_align(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->vq_align; >>> +} >>> + >>> +static u64 vduse_vdpa_get_features(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->device_features; >>> +} >>> + >>> +static int vduse_vdpa_set_features(struct vdpa_device *vdpa, u64 features) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + dev->driver_features = features; >>> + return 0; >>> +} >>> + >>> +static void vduse_vdpa_set_config_cb(struct vdpa_device *vdpa, >>> + struct vdpa_callback *cb) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + spin_lock(&dev->irq_lock); >>> + dev->config_cb.callback = cb->callback; >>> + dev->config_cb.private = cb->private; >>> + spin_unlock(&dev->irq_lock); >>> +} >>> + >>> +static u16 vduse_vdpa_get_vq_num_max(struct vdpa_device *vdpa, u16 idx) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->vqs[idx].num_max; >>> +} >>> + >>> +static u32 vduse_vdpa_get_device_id(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->device_id; >>> +} >>> + >>> +static u32 vduse_vdpa_get_vendor_id(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->vendor_id; >>> +} >>> + >>> +static u8 vduse_vdpa_get_status(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + 
return dev->status; >>> +} >>> + >>> +static void vduse_vdpa_set_status(struct vdpa_device *vdpa, u8 status) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + if (vduse_dev_set_status(dev, status)) >>> + return; >>> + >>> + dev->status = status; >> >> It looks to me like such a design excludes the possibility of letting the userspace >> device set bits like NEEDS_RESET in the future. >> > Looks like we can achieve that via the ioctl. Which ioctl can be used for this? > >> I wonder if it's better to do something similar to ccw: >> >> 1) requires the userspace to update the status bit in the response >> 2) update the dev->status to the status in the response if no timeout >> >> Then userspace could then set NEEDS_RESET if necessary. >> > But NEEDS_RESET does not only happen in this case. Yes, so you need an ioctl for userspace to update the device status (NEEDS_RESET) probably. > >>> + if (status == 0) >>> + vduse_dev_reset(dev); >>> +} >>> + >>> +static size_t vduse_vdpa_get_config_size(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->config_size; >>> +} >>> + >>> +static void vduse_vdpa_get_config(struct vdpa_device *vdpa, unsigned int offset, >>> + void *buf, unsigned int len) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + if (len > dev->config_size - offset) >>> + return; >>> + >>> + memcpy(buf, dev->config + offset, len); >>> +} >>> + >>> +static void vduse_vdpa_set_config(struct vdpa_device *vdpa, unsigned int offset, >>> + const void *buf, unsigned int len) >>> +{ >>> + /* Now we only support read-only configuration space */ >>> +} >>> + >>> +static u32 vduse_vdpa_get_generation(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + return dev->generation; >>> +} >>> + >>> +static int vduse_vdpa_set_map(struct vdpa_device *vdpa, >>> + struct vhost_iotlb *iotlb) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + int 
ret; >>> + >>> + ret = vduse_domain_set_map(dev->domain, iotlb); >>> + if (ret) >>> + return ret; >>> + >>> + ret = vduse_dev_update_iotlb(dev, 0ULL, ULLONG_MAX); >>> + if (ret) { >>> + vduse_domain_clear_map(dev->domain, iotlb); >>> + return ret; >>> + } >>> + >>> + return 0; >>> +} >>> + >>> +static void vduse_vdpa_free(struct vdpa_device *vdpa) >>> +{ >>> + struct vduse_dev *dev = vdpa_to_vduse(vdpa); >>> + >>> + dev->vdev = NULL; >>> +} >>> + >>> +static const struct vdpa_config_ops vduse_vdpa_config_ops = { >>> + .set_vq_address = vduse_vdpa_set_vq_address, >>> + .kick_vq = vduse_vdpa_kick_vq, >>> + .set_vq_cb = vduse_vdpa_set_vq_cb, >>> + .set_vq_num = vduse_vdpa_set_vq_num, >>> + .set_vq_ready = vduse_vdpa_set_vq_ready, >>> + .get_vq_ready = vduse_vdpa_get_vq_ready, >>> + .set_vq_state = vduse_vdpa_set_vq_state, >>> + .get_vq_state = vduse_vdpa_get_vq_state, >>> + .get_vq_align = vduse_vdpa_get_vq_align, >>> + .get_features = vduse_vdpa_get_features, >>> + .set_features = vduse_vdpa_set_features, >>> + .set_config_cb = vduse_vdpa_set_config_cb, >>> + .get_vq_num_max = vduse_vdpa_get_vq_num_max, >>> + .get_device_id = vduse_vdpa_get_device_id, >>> + .get_vendor_id = vduse_vdpa_get_vendor_id, >>> + .get_status = vduse_vdpa_get_status, >>> + .set_status = vduse_vdpa_set_status, >>> + .get_config_size = vduse_vdpa_get_config_size, >>> + .get_config = vduse_vdpa_get_config, >>> + .set_config = vduse_vdpa_set_config, >>> + .get_generation = vduse_vdpa_get_generation, >>> + .set_map = vduse_vdpa_set_map, >>> + .free = vduse_vdpa_free, >>> +}; >>> + >>> +static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page, >>> + unsigned long offset, size_t size, >>> + enum dma_data_direction dir, >>> + unsigned long attrs) >>> +{ >>> + struct vduse_dev *vdev = dev_to_vduse(dev); >>> + struct vduse_iova_domain *domain = vdev->domain; >>> + >>> + return vduse_domain_map_page(domain, page, offset, size, dir, attrs); >>> +} >>> + >>> +static void 
vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr, >>> + size_t size, enum dma_data_direction dir, >>> + unsigned long attrs) >>> +{ >>> + struct vduse_dev *vdev = dev_to_vduse(dev); >>> + struct vduse_iova_domain *domain = vdev->domain; >>> + >>> + return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs); >>> +} >>> + >>> +static void *vduse_dev_alloc_coherent(struct device *dev, size_t size, >>> + dma_addr_t *dma_addr, gfp_t flag, >>> + unsigned long attrs) >>> +{ >>> + struct vduse_dev *vdev = dev_to_vduse(dev); >>> + struct vduse_iova_domain *domain = vdev->domain; >>> + unsigned long iova; >>> + void *addr; >>> + >>> + *dma_addr = DMA_MAPPING_ERROR; >>> + addr = vduse_domain_alloc_coherent(domain, size, >>> + (dma_addr_t *)&iova, flag, attrs); >>> + if (!addr) >>> + return NULL; >>> + >>> + *dma_addr = (dma_addr_t)iova; >>> + >>> + return addr; >>> +} >>> + >>> +static void vduse_dev_free_coherent(struct device *dev, size_t size, >>> + void *vaddr, dma_addr_t dma_addr, >>> + unsigned long attrs) >>> +{ >>> + struct vduse_dev *vdev = dev_to_vduse(dev); >>> + struct vduse_iova_domain *domain = vdev->domain; >>> + >>> + vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs); >>> +} >>> + >>> +static size_t vduse_dev_max_mapping_size(struct device *dev) >>> +{ >>> + struct vduse_dev *vdev = dev_to_vduse(dev); >>> + struct vduse_iova_domain *domain = vdev->domain; >>> + >>> + return domain->bounce_size; >>> +} >>> + >>> +static const struct dma_map_ops vduse_dev_dma_ops = { >>> + .map_page = vduse_dev_map_page, >>> + .unmap_page = vduse_dev_unmap_page, >>> + .alloc = vduse_dev_alloc_coherent, >>> + .free = vduse_dev_free_coherent, >>> + .max_mapping_size = vduse_dev_max_mapping_size, >>> +}; >>> + >>> +static unsigned int perm_to_file_flags(u8 perm) >>> +{ >>> + unsigned int flags = 0; >>> + >>> + switch (perm) { >>> + case VDUSE_ACCESS_WO: >>> + flags |= O_WRONLY; >>> + break; >>> + case VDUSE_ACCESS_RO: >>> + flags |= O_RDONLY; >>> 
+ break; >>> + case VDUSE_ACCESS_RW: >>> + flags |= O_RDWR; >>> + break; >>> + default: >>> + WARN(1, "invalidate vhost IOTLB permission\n"); >>> + break; >>> + } >>> + >>> + return flags; >>> +} >>> + >>> +static int vduse_kickfd_setup(struct vduse_dev *dev, >>> + struct vduse_vq_eventfd *eventfd) >>> +{ >>> + struct eventfd_ctx *ctx = NULL; >>> + struct vduse_virtqueue *vq; >>> + u32 index; >>> + >>> + if (eventfd->index >= dev->vq_num) >>> + return -EINVAL; >>> + >>> + index = array_index_nospec(eventfd->index, dev->vq_num); >>> + vq = &dev->vqs[index]; >>> + if (eventfd->fd >= 0) { >>> + ctx = eventfd_ctx_fdget(eventfd->fd); >>> + if (IS_ERR(ctx)) >>> + return PTR_ERR(ctx); >>> + } else if (eventfd->fd != VDUSE_EVENTFD_DEASSIGN) >>> + return 0; >>> + >>> + spin_lock(&vq->kick_lock); >>> + if (vq->kickfd) >>> + eventfd_ctx_put(vq->kickfd); >>> + vq->kickfd = ctx; >>> + if (vq->ready && vq->kicked && vq->kickfd) { >>> + eventfd_signal(vq->kickfd, 1); >>> + vq->kicked = false; >>> + } >>> + spin_unlock(&vq->kick_lock); >>> + >>> + return 0; >>> +} >>> + >>> +static bool vduse_dev_is_ready(struct vduse_dev *dev) >>> +{ >>> + int i; >>> + >>> + for (i = 0; i < dev->vq_num; i++) >>> + if (!dev->vqs[i].num_max) >>> + return false; >>> + >>> + return true; >>> +} >>> + >>> +static void vduse_dev_irq_inject(struct work_struct *work) >>> +{ >>> + struct vduse_dev *dev = container_of(work, struct vduse_dev, inject); >>> + >>> + spin_lock_irq(&dev->irq_lock); >>> + if (dev->config_cb.callback) >>> + dev->config_cb.callback(dev->config_cb.private); >>> + spin_unlock_irq(&dev->irq_lock); >>> +} >>> + >>> +static void vduse_vq_irq_inject(struct work_struct *work) >>> +{ >>> + struct vduse_virtqueue *vq = container_of(work, >>> + struct vduse_virtqueue, inject); >>> + >>> + spin_lock_irq(&vq->irq_lock); >>> + if (vq->ready && vq->cb.callback) >>> + vq->cb.callback(vq->cb.private); >>> + spin_unlock_irq(&vq->irq_lock); >>> +} >>> + >>> +static long vduse_dev_ioctl(struct file 
*file, unsigned int cmd, >>> + unsigned long arg) >>> +{ >>> + struct vduse_dev *dev = file->private_data; >>> + void __user *argp = (void __user *)arg; >>> + int ret; >>> + >>> + switch (cmd) { >>> + case VDUSE_IOTLB_GET_FD: { >>> + struct vduse_iotlb_entry entry; >>> + struct vhost_iotlb_map *map; >>> + struct vdpa_map_file *map_file; >>> + struct vduse_iova_domain *domain = dev->domain; >>> + struct file *f = NULL; >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(&entry, argp, sizeof(entry))) >>> + break; >>> + >>> + ret = -EINVAL; >>> + if (entry.start > entry.last) >>> + break; >>> + >>> + spin_lock(&domain->iotlb_lock); >>> + map = vhost_iotlb_itree_first(domain->iotlb, >>> + entry.start, entry.last); >>> + if (map) { >>> + map_file = (struct vdpa_map_file *)map->opaque; >>> + f = get_file(map_file->file); >>> + entry.offset = map_file->offset; >>> + entry.start = map->start; >>> + entry.last = map->last; >>> + entry.perm = map->perm; >>> + } >>> + spin_unlock(&domain->iotlb_lock); >>> + ret = -EINVAL; >>> + if (!f) >>> + break; >>> + >>> + ret = -EFAULT; >>> + if (copy_to_user(argp, &entry, sizeof(entry))) { >>> + fput(f); >>> + break; >>> + } >>> + ret = receive_fd(f, perm_to_file_flags(entry.perm)); >>> + fput(f); >>> + break; >>> + } >>> + case VDUSE_DEV_GET_FEATURES: >> >> Let's add a comment to explain here. E.g. we just mirror what the driver >> wrote and the driver is expected to check FEATURES_OK. >> > I already documented some details in include/uapi/linux/vduse.h and > Documentation/userspace-api/vduse.rst The point is the behavior of "mirroring" is one of the choices: " The device MUST present any valid feature bits the driver has written in driver_feature, starting at bit driver_feature_select * 32 for any driver_feature_select written by the driver. Valid feature bits are those which are a subset of the corresponding device_feature bits. 
The device MAY present invalid bits written by the driver. Note: This means that a device can ignore writes for feature bits it never offers, and simply present 0 on reads. Or it can just mirror what the driver wrote (but it will still have to check them when the driver sets FEATURES_OK). Note: A driver shouldn’t write invalid bits anyway, as per 3.1.1 <https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-930001>, but this attempts to handle it. " It would be helpful to add a comment here or people may check whether it follows the spec. > >>> + ret = put_user(dev->driver_features, (u64 __user *)argp); >>> + break; >>> + case VDUSE_DEV_SET_CONFIG: { >>> + struct vduse_config_data config; >>> + unsigned long size = offsetof(struct vduse_config_data, >>> + buffer); >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(&config, argp, size)) >>> + break; >>> + >>> + ret = -EINVAL; >>> + if (config.length == 0 || >>> + config.length > dev->config_size - config.offset) >>> + break; >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(dev->config + config.offset, argp + size, >>> + config.length)) >>> + break; >>> + >>> + ret = 0; >>> + break; >>> + } >>> + case VDUSE_DEV_INJECT_IRQ: >> >> It's better to have "config" in the name. >> > OK. 
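The subset rule in the spec text quoted above boils down to a one-line bitmask check that the device has to apply when the driver sets FEATURES_OK. A standalone sketch of that check (not the VDUSE implementation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * A device may mirror whatever the driver wrote into driver_features,
 * but at FEATURES_OK time the accepted bits must be a subset of the
 * features the device offered.
 */
static bool driver_features_valid(uint64_t device_features,
				  uint64_t driver_features)
{
	return (driver_features & ~device_features) == 0;
}
```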
> >>> + ret = 0; >>> + queue_work(vduse_irq_wq, &dev->inject); >>> + break; >>> + case VDUSE_VQ_SETUP: { >>> + struct vduse_vq_config config; >>> + u32 index; >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(&config, argp, sizeof(config))) >>> + break; >>> + >>> + ret = -EINVAL; >>> + if (config.index >= dev->vq_num) >>> + break; >>> + >>> + index = array_index_nospec(config.index, dev->vq_num); >>> + dev->vqs[index].num_max = config.max_size; >>> + ret = 0; >>> + break; >>> + } >>> + case VDUSE_VQ_GET_INFO: { >>> + struct vduse_vq_info vq_info; >>> + struct vduse_virtqueue *vq; >>> + u32 index; >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(&vq_info, argp, sizeof(vq_info))) >>> + break; >>> + >>> + ret = -EINVAL; >>> + if (vq_info.index >= dev->vq_num) >>> + break; >>> + >>> + index = array_index_nospec(vq_info.index, dev->vq_num); >>> + vq = &dev->vqs[index]; >>> + vq_info.desc_addr = vq->desc_addr; >>> + vq_info.driver_addr = vq->driver_addr; >>> + vq_info.device_addr = vq->device_addr; >>> + vq_info.num = vq->num; >>> + >>> + if (dev->driver_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { >>> + vq_info.packed.last_avail_counter = >>> + vq->state.packed.last_avail_counter; >>> + vq_info.packed.last_avail_idx = >>> + vq->state.packed.last_avail_idx; >>> + vq_info.packed.last_used_counter = >>> + vq->state.packed.last_used_counter; >>> + vq_info.packed.last_used_idx = >>> + vq->state.packed.last_used_idx; >>> + } else >>> + vq_info.split.avail_index = >>> + vq->state.split.avail_index; >>> + >>> + vq_info.ready = vq->ready; >>> + >>> + ret = -EFAULT; >>> + if (copy_to_user(argp, &vq_info, sizeof(vq_info))) >>> + break; >>> + >>> + ret = 0; >>> + break; >>> + } >>> + case VDUSE_VQ_SETUP_KICKFD: { >>> + struct vduse_vq_eventfd eventfd; >>> + >>> + ret = -EFAULT; >>> + if (copy_from_user(&eventfd, argp, sizeof(eventfd))) >>> + break; >>> + >>> + ret = vduse_kickfd_setup(dev, &eventfd); >>> + break; >>> + } >>> + case VDUSE_VQ_INJECT_IRQ: { >>> + u32 index; >>> + 
>>> + ret = -EFAULT; >>> + if (get_user(index, (u32 __user *)argp)) >>> + break; >>> + >>> + ret = -EINVAL; >>> + if (index >= dev->vq_num) >>> + break; >>> + >>> + ret = 0; >>> + index = array_index_nospec(index, dev->vq_num); >>> + queue_work(vduse_irq_wq, &dev->vqs[index].inject); >>> + break; >>> + } >>> + default: >>> + ret = -ENOIOCTLCMD; >>> + break; >>> + } >>> + >>> + return ret; >>> +} >>> + >>> +static int vduse_dev_release(struct inode *inode, struct file *file) >>> +{ >>> + struct vduse_dev *dev = file->private_data; >>> + >>> + spin_lock(&dev->msg_lock); >>> + /* Make sure the inflight messages can processed after reconncection */ >>> + list_splice_init(&dev->recv_list, &dev->send_list); >>> + spin_unlock(&dev->msg_lock); >>> + dev->connected = false; >>> + >>> + return 0; >>> +} >>> + >>> +static struct vduse_dev *vduse_dev_get_from_minor(int minor) >>> +{ >>> + struct vduse_dev *dev; >>> + >>> + mutex_lock(&vduse_lock); >>> + dev = idr_find(&vduse_idr, minor); >>> + mutex_unlock(&vduse_lock); >>> + >>> + return dev; >>> +} >>> + >>> +static int vduse_dev_open(struct inode *inode, struct file *file) >>> +{ >>> + int ret; >>> + struct vduse_dev *dev = vduse_dev_get_from_minor(iminor(inode)); >>> + >>> + if (!dev) >>> + return -ENODEV; >>> + >>> + ret = -EBUSY; >>> + mutex_lock(&dev->lock); >>> + if (dev->connected) >>> + goto unlock; >>> + >>> + ret = 0; >>> + dev->connected = true; >>> + file->private_data = dev; >>> +unlock: >>> + mutex_unlock(&dev->lock); >>> + >>> + return ret; >>> +} >>> + >>> +static const struct file_operations vduse_dev_fops = { >>> + .owner = THIS_MODULE, >>> + .open = vduse_dev_open, >>> + .release = vduse_dev_release, >>> + .read_iter = vduse_dev_read_iter, >>> + .write_iter = vduse_dev_write_iter, >>> + .poll = vduse_dev_poll, >>> + .unlocked_ioctl = vduse_dev_ioctl, >>> + .compat_ioctl = compat_ptr_ioctl, >>> + .llseek = noop_llseek, >>> +}; >>> + >>> +static struct vduse_dev *vduse_dev_create(void) >>> +{ >>> + struct 
vduse_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL); >>> + >>> + if (!dev) >>> + return NULL; >>> + >>> + mutex_init(&dev->lock); >>> + spin_lock_init(&dev->msg_lock); >>> + INIT_LIST_HEAD(&dev->send_list); >>> + INIT_LIST_HEAD(&dev->recv_list); >>> + spin_lock_init(&dev->irq_lock); >>> + >>> + INIT_WORK(&dev->inject, vduse_dev_irq_inject); >>> + init_waitqueue_head(&dev->waitq); >>> + >>> + return dev; >>> +} >>> + >>> +static void vduse_dev_destroy(struct vduse_dev *dev) >>> +{ >>> + kfree(dev); >>> +} >>> + >>> +static struct vduse_dev *vduse_find_dev(const char *name) >>> +{ >>> + struct vduse_dev *dev; >>> + int id; >>> + >>> + idr_for_each_entry(&vduse_idr, dev, id) >>> + if (!strcmp(dev->name, name)) >>> + return dev; >>> + >>> + return NULL; >>> +} >>> + >>> +static int vduse_destroy_dev(char *name) >>> +{ >>> + struct vduse_dev *dev = vduse_find_dev(name); >>> + >>> + if (!dev) >>> + return -EINVAL; >>> + >>> + mutex_lock(&dev->lock); >>> + if (dev->vdev || dev->connected) { >>> + mutex_unlock(&dev->lock); >>> + return -EBUSY; >>> + } >>> + dev->connected = true; >>> + mutex_unlock(&dev->lock); >>> + >>> + vduse_dev_reset(dev); >>> + device_destroy(vduse_class, MKDEV(MAJOR(vduse_major), dev->minor)); >>> + idr_remove(&vduse_idr, dev->minor); >>> + kvfree(dev->config); >>> + kfree(dev->vqs); >>> + vduse_domain_destroy(dev->domain); >>> + kfree(dev->name); >>> + vduse_dev_destroy(dev); >>> + module_put(THIS_MODULE); >>> + >>> + return 0; >>> +} >>> + >>> +static bool device_is_allowed(u32 device_id) >>> +{ >>> + int i; >>> + >>> + for (i = 0; i < ARRAY_SIZE(allowed_device_id); i++) >>> + if (allowed_device_id[i] == device_id) >>> + return true; >>> + >>> + return false; >>> +} >>> + >>> +static bool features_is_valid(u64 features) >>> +{ >>> + if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM))) >>> + return false; >>> + >>> + /* Now we only support read-only configuration space */ >>> + if (features & (1ULL << VIRTIO_BLK_F_CONFIG_WCE)) >>> + return false; 
>>> + >>> + return true; >>> +} >>> + >>> +static bool vduse_validate_config(struct vduse_dev_config *config) >>> +{ >>> + if (config->bounce_size > VDUSE_MAX_BOUNCE_SIZE) >>> + return false; >>> + >>> + if (config->vq_align > PAGE_SIZE) >>> + return false; >>> + >>> + if (config->config_size > PAGE_SIZE) >>> + return false; >> >> Any reason for this check? >> > To limit the kernel buffer size for configuration space. Is the > PAGE_SIZE enough? Should be sufficient for now. Thanks > > Thanks, > Yongji 
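The limits enforced by vduse_validate_config() above can be modeled in plain userspace C; `MAX_BOUNCE_SIZE` and the 4 KiB page size below are assumed placeholder values, not the driver's actual constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_BOUNCE_SIZE (64UL << 20) /* assumed cap, for illustration */
#define PAGE_SIZE_4K 4096UL          /* assumed page size */

struct dev_config_model {
	uint64_t bounce_size;
	uint32_t vq_align;
	uint32_t config_size;
};

/*
 * Mirrors the checks quoted above: config_size is capped at one page
 * to bound the kernel buffer that backs the virtio config space.
 */
static bool validate_config(const struct dev_config_model *c)
{
	if (c->bounce_size > MAX_BOUNCE_SIZE)
		return false;
	if (c->vq_align > PAGE_SIZE_4K)
		return false;
	if (c->config_size > PAGE_SIZE_4K)
		return false;
	return true;
}
```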
* Re: [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace [not found] ` <CACycT3vTyR=+6xOJyXCu_bGAKcz4Fx3bA25WfdBjhxJ6MOvLzw@mail.gmail.com> @ 2021-07-15 5:00 ` Jason Wang 0 siblings, 0 replies; 18+ messages in thread From: Jason Wang @ 2021-07-15 5:00 UTC (permalink / raw) To: Yongji Xie Cc: kvm, Michael S. Tsirkin, virtualization, Christian Brauner, Jonathan Corbet, joro, Matthew Wilcox, Christoph Hellwig, Dan Carpenter, Liu Xiaodong, Al Viro, Stefan Hajnoczi, songmuchun, Jens Axboe, He Zhe, Greg KH, Randy Dunlap, linux-kernel, iommu, bcrl, netdev, linux-fsdevel, Mika Penttilä On 2021/7/15 12:03 PM, Yongji Xie wrote: >> Which ioctl can be used for this? >> > I mean we can introduce a new ioctl for that in the future. Ok, I see. > >>>> I wonder if it's better to do something similar to ccw: >>>> >>>> 1) requires the userspace to update the status bit in the response >>>> 2) update the dev->status to the status in the response if no timeout >>>> >>>> Then userspace could then set NEEDS_RESET if necessary. >>>> >>> But NEEDS_RESET does not only happen in this case. >> Yes, so you need an ioctl for userspace to update the device status >> (NEEDS_RESET) probably. >> > Yes, but I'd like to do that after the initial version is merged since > NEEDS_RESET is not supported now. Right. Thanks 
* Re: [PATCH v9 17/17] Documentation: Add documentation for VDUSE [not found] ` <20210713084656.232-18-xieyongji@bytedance.com> @ 2021-07-15 5:18 ` Jason Wang 0 siblings, 0 replies; 18+ messages in thread From: Jason Wang @ 2021-07-15 5:18 UTC (permalink / raw) To: Xie Yongji, mst, stefanha, sgarzare, parav, hch, christian.brauner, rdunlap, willy, viro, axboe, bcrl, corbet, mika.penttila, dan.carpenter, joro, gregkh, zhe.he, xiaodong.liu Cc: kvm, netdev, linux-kernel, virtualization, iommu, songmuchun, linux-fsdevel On 2021/7/13 4:46 PM, Xie Yongji wrote: > VDUSE (vDPA Device in Userspace) is a framework to support > implementing software-emulated vDPA devices in userspace. This > document is intended to clarify the VDUSE design and usage. > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > --- > Documentation/userspace-api/index.rst | 1 + > Documentation/userspace-api/vduse.rst | 248 ++++++++++++++++++++++++++++++++++ > 2 files changed, 249 insertions(+) > create mode 100644 Documentation/userspace-api/vduse.rst > > diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst > index 0b5eefed027e..c432be070f67 100644 > --- a/Documentation/userspace-api/index.rst > +++ b/Documentation/userspace-api/index.rst > @@ -27,6 +27,7 @@ place where this information is gathered. > iommu > media/index > sysfs-platform_profile > + vduse > > .. only:: subproject and html > > diff --git a/Documentation/userspace-api/vduse.rst b/Documentation/userspace-api/vduse.rst > new file mode 100644 > index 000000000000..2c0d56d4b2da > --- /dev/null > +++ b/Documentation/userspace-api/vduse.rst > @@ -0,0 +1,248 @@ > +================================== > +VDUSE - "vDPA Device in Userspace" > +================================== > + > +vDPA (virtio data path acceleration) device is a device that uses a > +datapath which complies with the virtio specifications with vendor > +specific control path. 
vDPA devices can be both physically located on > +the hardware or emulated by software. VDUSE is a framework that makes it > +possible to implement software-emulated vDPA devices in userspace. And > +to make the device emulation more secure, the emulated vDPA device's > +control path is handled in the kernel and only the data path is > +implemented in the userspace. > + > +Note that only virtio block device is supported by VDUSE framework now, > +which can reduce security risks when the userspace process that implements > +the data path is run by an unprivileged user. The support for other device > +types can be added after the security issue of corresponding device driver > +is clarified or fixed in the future. > + > +Start/Stop VDUSE devices > +------------------------ > + > +VDUSE devices are started as follows: Not native speaker but "created" is probably better. > + > +1. Create a new VDUSE instance with ioctl(VDUSE_CREATE_DEV) on > + /dev/vduse/control. > + > +2. Setup each virtqueue with ioctl(VDUSE_VQ_SETUP) on /dev/vduse/$NAME. > + > +3. Begin processing VDUSE messages from /dev/vduse/$NAME. The first > + messages will arrive while attaching the VDUSE instance to vDPA bus. > + > +4. Send the VDPA_CMD_DEV_NEW netlink message to attach the VDUSE > + instance to vDPA bus. I think 4 should be done before 3? > + > +VDUSE devices are stopped as follows: "removed" or "destroyed" is better than "stopped" here. > + > +1. Send the VDPA_CMD_DEV_DEL netlink message to detach the VDUSE > + instance from vDPA bus. > + > +2. Close the file descriptor referring to /dev/vduse/$NAME. > + > +3. Destroy the VDUSE instance with ioctl(VDUSE_DESTROY_DEV) on > + /dev/vduse/control. > + > +The netlink messages can be sent via vdpa tool in iproute2 or use the > +below sample codes: > + > +.. 
code-block:: c > + > + static int netlink_add_vduse(const char *name, enum vdpa_command cmd) > + { > + struct nl_sock *nlsock; > + struct nl_msg *msg; > + int famid; > + > + nlsock = nl_socket_alloc(); > + if (!nlsock) > + return -ENOMEM; > + > + if (genl_connect(nlsock)) > + goto free_sock; > + > + famid = genl_ctrl_resolve(nlsock, VDPA_GENL_NAME); > + if (famid < 0) > + goto close_sock; > + > + msg = nlmsg_alloc(); > + if (!msg) > + goto close_sock; > + > + if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, famid, 0, 0, cmd, 0)) > + goto nla_put_failure; > + > + NLA_PUT_STRING(msg, VDPA_ATTR_DEV_NAME, name); > + if (cmd == VDPA_CMD_DEV_NEW) > + NLA_PUT_STRING(msg, VDPA_ATTR_MGMTDEV_DEV_NAME, "vduse"); > + > + if (nl_send_sync(nlsock, msg)) > + goto close_sock; > + > + nl_close(nlsock); > + nl_socket_free(nlsock); > + > + return 0; > + nla_put_failure: > + nlmsg_free(msg); > + close_sock: > + nl_close(nlsock); > + free_sock: > + nl_socket_free(nlsock); > + return -1; > + } > + > +How VDUSE works > +--------------- > + > +As mentioned above, a VDUSE device is created by ioctl(VDUSE_CREATE_DEV) on > +/dev/vduse/control. With this ioctl, userspace can specify some basic configuration > +such as device name (uniquely identify a VDUSE device), virtio features, virtio > +configuration space, bounce buffer size This bounce buffer size looks questionable. We'd better not expose any implementation details to userspace. I think we can simply start with a module parameter for VDUSE? > and so on for this emulated device. Then > +a char device interface (/dev/vduse/$NAME) is exported to userspace for device > +emulation. Userspace can use the VDUSE_VQ_SETUP ioctl on /dev/vduse/$NAME to > +add per-virtqueue configuration such as the max size of virtqueue to the device. > + > +After the initialization, the VDUSE device can be attached to vDPA bus via > +the VDPA_CMD_DEV_NEW netlink message. 
> +Userspace needs to read()/write() on /dev/vduse/$NAME to receive/reply
> +some control messages from/to VDUSE kernel module as follows:
> +
> +.. code-block:: c
> +
> +   static int vduse_message_handler(int dev_fd)
> +   {
> +       int len;
> +       struct vduse_dev_request req;
> +       struct vduse_dev_response resp;
> +
> +       len = read(dev_fd, &req, sizeof(req));
> +       if (len != sizeof(req))
> +           return -1;
> +
> +       resp.request_id = req.request_id;
> +
> +       switch (req.type) {
> +
> +       /* handle different types of message */

"messages"?

> +
> +       }
> +
> +       len = write(dev_fd, &resp, sizeof(resp));
> +       if (len != sizeof(resp))
> +           return -1;
> +
> +       return 0;
> +   }
> +
> +There are now three types of messages introduced by VDUSE framework:
> +
> +- VDUSE_GET_VQ_STATE: Get the state for virtqueue, userspace should return
> +  avail index for split virtqueue or the device/driver ring wrap counters and
> +  the avail and used index for packed virtqueue.
> +
> +- VDUSE_SET_STATUS: Set the device status, userspace should follow
> +  the virtio spec: https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.html
> +  to process this message. For example, fail to set the FEATURES_OK device
> +  status bit if the device can not accept the negotiated virtio features
> +  get from the VDUSE_GET_FEATURES ioctl.
> +
> +- VDUSE_UPDATE_IOTLB: Notify userspace to update the memory mapping for specified
> +  IOVA range, userspace should firstly remove the old mapping, then setup the new
> +  mapping via the VDUSE_IOTLB_GET_FD ioctl.
> +
> +After DRIVER_OK status bit is set via the VDUSE_SET_STATUS message, userspace is
> +able to start the dataplane processing with the help of below ioctls:
> +
> +- VDUSE_IOTLB_GET_FD: Find the first IOVA region that overlaps with the specified
> +  range [start, last] and return the corresponding file descriptor. In vhost-vdpa
> +  cases, it might be a full chunk of guest RAM.
> +  And in virtio-vdpa cases, it should
> +  be the whole bounce buffer or the memory region that stores one virtqueue's
> +  metadata (descriptor table, available ring and used ring).

I think we can simply remove the driver-specific sentences and just say
to map the pages to the IOVA.

> Userspace can access
> +  this IOVA region by passing fd and corresponding size, offset, perm to mmap().
> +  For example:
> +
> +.. code-block:: c
> +
> +   static int perm_to_prot(uint8_t perm)
> +   {
> +       int prot = 0;
> +
> +       switch (perm) {
> +       case VDUSE_ACCESS_WO:
> +           prot |= PROT_WRITE;
> +           break;
> +       case VDUSE_ACCESS_RO:
> +           prot |= PROT_READ;
> +           break;
> +       case VDUSE_ACCESS_RW:
> +           prot |= PROT_READ | PROT_WRITE;
> +           break;
> +       }
> +
> +       return prot;
> +   }
> +
> +   static void *iova_to_va(int dev_fd, uint64_t iova, uint64_t *len)
> +   {
> +       int fd;
> +       void *addr;
> +       size_t size;
> +       struct vduse_iotlb_entry entry;
> +
> +       entry.start = iova;
> +       entry.last = iova;
> +       fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
> +       if (fd < 0)
> +           return NULL;
> +
> +       size = entry.last - entry.start + 1;
> +       *len = entry.last - iova + 1;
> +       addr = mmap(0, size, perm_to_prot(entry.perm), MAP_SHARED,
> +                   fd, entry.offset);
> +       close(fd);
> +       if (addr == MAP_FAILED)
> +           return NULL;
> +
> +       /*
> +        * Using some data structures such as linked list to store
> +        * the iotlb mapping. The munmap(2) should be called for the
> +        * cached mapping when the corresponding VDUSE_UPDATE_IOTLB
> +        * message is received or the device is reset.
> +        */
> +
> +       return addr + iova - entry.start;
> +   }
> +
> +- VDUSE_VQ_GET_INFO: Get the specified virtqueue's information including the size,
> +  the IOVAs of descriptor table, available ring and used ring, the state
> +  and the ready status.

Maybe it's better to just show the vduse_vq_info here, or both. (Maybe we
can do the same for the rest of the ioctls.)
> The IOVAs should be passed to the VDUSE_IOTLB_GET_FD ioctl
> + so that userspace can access the descriptor table, available ring and used ring.
> +
> +- VDUSE_VQ_SETUP_KICKFD: Setup the kick eventfd for the specified virtqueues.
> +  The kick eventfd is used by VDUSE kernel module to notify userspace to consume
> +  the available ring.
> +
> +- VDUSE_INJECT_VQ_IRQ: Inject an interrupt for specific virtqueue. It's used to
> +  notify virtio driver to consume the used ring.

The config interrupt injection is missing.

> +
> +More details on the uAPI can be found in include/uapi/linux/vduse.h.
> +
> +MMU-based IOMMU Driver
> +----------------------
> +

It's kind of a software IOTLB actually. Maybe we can call that "MMU-based
software IOTLB".

> +VDUSE framework implements an MMU-based on-chip IOMMU driver to support
> +mapping the kernel DMA buffer into the userspace IOVA region dynamically.
> +This is mainly designed for virtio-vdpa case (kernel virtio drivers).
> +
> +The basic idea behind this driver is treating MMU (VA->PA) as IOMMU (IOVA->PA).
> +The driver will set up MMU mapping instead of IOMMU mapping for the DMA transfer
> +so that the userspace process is able to use its virtual address to access
> +the DMA buffer in kernel.
> +
> +And to avoid security issue, a bounce-buffering mechanism is introduced to
> +prevent userspace accessing the original buffer directly which may contain other
> +kernel data.

I wonder if it's worth describing the method we used for guarding against
a malicious userspace device.

Thanks

> During the mapping, unmapping, the driver will copy the data from
> +the original buffer to the bounce buffer and back, depending on the direction of
> +the transfer. And the bounce-buffer addresses will be mapped into the user address
> +space instead of the original one.
end of thread, other threads:[~2021-07-15 5:19 UTC | newest]
Thread overview: 18+ messages
-- links below jump to the message on this page --
[not found] <20210713084656.232-1-xieyongji@bytedance.com>
[not found] ` <20210713084656.232-8-xieyongji@bytedance.com>
2021-07-13 11:02 ` [PATCH v9 07/17] virtio: Don't set FAILED status bit on device index allocation failure Dan Carpenter
[not found] ` <20210713084656.232-14-xieyongji@bytedance.com>
2021-07-13 11:31 ` [PATCH v9 13/17] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap() Dan Carpenter
2021-07-14 2:14 ` Jason Wang
2021-07-14 8:05 ` Dan Carpenter
2021-07-14 9:41 ` Jason Wang
2021-07-14 9:57 ` Dan Carpenter
2021-07-15 2:20 ` Jason Wang
[not found] ` <20210713084656.232-4-xieyongji@bytedance.com>
2021-07-14 4:20 ` [PATCH v9 03/17] vdpa: Fix code indentation Joe Perches
[not found] ` <20210713084656.232-17-xieyongji@bytedance.com>
2021-07-13 13:27 ` [PATCH v9 16/17] vduse: Introduce VDUSE - vDPA Device in Userspace Dan Carpenter
2021-07-14 2:54 ` Jason Wang
2021-07-14 5:45 ` Jason Wang
2021-07-14 5:54 ` Michael S. Tsirkin
2021-07-14 6:02 ` Jason Wang
2021-07-14 6:47 ` Greg KH
2021-07-14 8:56 ` Jason Wang
[not found] ` <CACycT3uh+wUeDM1H7JiCJTMeCVCBngURGKeXD-h+meekNNwiQw@mail.gmail.com>
2021-07-14 9:12 ` Jason Wang
[not found] ` <CACycT3vTyR=+6xOJyXCu_bGAKcz4Fx3bA25WfdBjhxJ6MOvLzw@mail.gmail.com>
2021-07-15 5:00 ` Jason Wang
[not found] ` <20210713084656.232-18-xieyongji@bytedance.com>
2021-07-15 5:18 ` [PATCH v9 17/17] Documentation: Add documentation for VDUSE Jason Wang