public inbox for kvm@vger.kernel.org
* [[RESEND] iommufd PATCH v2 0/2] Make mdev driver dma_unmap callback tolerant to unmaps come before device open
@ 2022-11-29 10:58 Yi Liu
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev() Yi Liu
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable Yi Liu
  0 siblings, 2 replies; 8+ messages in thread
From: Yi Liu @ 2022-11-29 10:58 UTC (permalink / raw)
  To: jgg
  Cc: alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng, yi.l.liu,
	yi.y.sun, intel-gvt-dev, linux-s390, Tony Krowiak, Halil Pasic,
	Jason Herne, Zhenyu Wang, Zhi Wang

Jason's "Connect VFIO to IOMMUFD" series introduces a vfio iommufd compat
mode. In this mode, vfio_iommufd_bind() creates an access that has an unmap
callback, which can be called immediately. This means mdev drivers may
receive unmap requests before the mdev is opened. For now, only three
drivers (gvt, vfio-ap and vfio-ccw) provide dma_unmap(). vfio-ccw is fine
with such requests, while gvt-g and vfio-ap may have problems with them due
to their internal implementations. This series hardens those two drivers.

This series is based on Jason's below branch.

https://github.com/jgunthorpe/linux/tree/iommufd

(commit: 41973418f6c8c241ed5647d1408d5b917f24dfd8)

Changes:
v2:
 - Refine the cover letter and commit message of patch 0001 (Kevin)
 - Rename patch 0001 to better fit the commit message
 - Add r-b from Zhi for patch 0001
 - Tweak iova range test to assume page-aligned for patch 0002 (Jason)
 - Remove break so all queues within range are removed for patch 0002 (Kevin)

v1: https://lore.kernel.org/kvm/20221123134832.429589-1-yi.l.liu@intel.com/

Cc: Tony Krowiak <akrowiak@linux.ibm.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Jason Herne <jjherne@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: intel-gvt-dev@lists.freedesktop.org

Regards,
	Yi Liu

Matthew Rosato (1):
  vfio/ap: validate iova during dma_unmap and trigger irq disable

Yi Liu (1):
  i915/gvt: Move gvt mapping cache initialization to
    intel_vgpu_init_dev()

 drivers/gpu/drm/i915/gvt/kvmgt.c  | 13 +++++++++----
 drivers/s390/crypto/vfio_ap_ops.c | 18 +++++++++++++++++-
 2 files changed, 26 insertions(+), 5 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev()
  2022-11-29 10:58 [[RESEND] iommufd PATCH v2 0/2] Make mdev driver dma_unmap callback tolerant to unmaps come before device open Yi Liu
@ 2022-11-29 10:58 ` Yi Liu
  2022-12-01  3:25   ` Zhenyu Wang
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable Yi Liu
  1 sibling, 1 reply; 8+ messages in thread
From: Yi Liu @ 2022-11-29 10:58 UTC (permalink / raw)
  To: jgg
  Cc: alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng, yi.l.liu,
	yi.y.sun, intel-gvt-dev, linux-s390, Zhenyu Wang, Zhi Wang

The vfio container registers the .dma_unmap() callback after the device is
opened, so it's fine for mdev drivers to initialize their internal mapping
cache in .open_device(). See vfio_device_container_register().

Now with iommufd, an access ops with an unmap callback is registered
when the device is bound to iommufd, which is before .open_device()
is called. This implies gvt's .dma_unmap() could be called before its
internal mapping cache is initialized.

The fix is to move the gvt mapping cache initialization to vGPU init.
While at it, also move the ptable initialization.

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: intel-gvt-dev@lists.freedesktop.org
Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 7a45e5360caf..f563e5dbe66f 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 
 	vgpu->attached = true;
 
-	kvmgt_protect_table_init(vgpu);
-	gvt_cache_init(vgpu);
-
 	vgpu->track_node.track_write = kvmgt_page_track_write;
 	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
 	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
@@ -1451,9 +1448,17 @@ static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
 	struct intel_vgpu_type *type =
 		container_of(mdev->type, struct intel_vgpu_type, type);
+	int ret;
 
 	vgpu->gvt = kdev_to_i915(mdev->type->parent->dev)->gvt;
-	return intel_gvt_create_vgpu(vgpu, type->conf);
+	ret = intel_gvt_create_vgpu(vgpu, type->conf);
+	if (ret)
+		return ret;
+
+	kvmgt_protect_table_init(vgpu);
+	gvt_cache_init(vgpu);
+
+	return 0;
 }
 
 static void intel_vgpu_release_dev(struct vfio_device *vfio_dev)
-- 
2.34.1



* [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable
  2022-11-29 10:58 [[RESEND] iommufd PATCH v2 0/2] Make mdev driver dma_unmap callback tolerant to unmaps come before device open Yi Liu
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev() Yi Liu
@ 2022-11-29 10:58 ` Yi Liu
  2022-11-29 22:02   ` Anthony Krowiak
  1 sibling, 1 reply; 8+ messages in thread
From: Yi Liu @ 2022-11-29 10:58 UTC (permalink / raw)
  To: jgg
  Cc: alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng, yi.l.liu,
	yi.y.sun, intel-gvt-dev, linux-s390, Tony Krowiak, Halil Pasic,
	Jason Herne

From: Matthew Rosato <mjrosato@linux.ibm.com>

Currently, each mapped iova is stashed in its associated vfio_ap_queue;
when we get an unmap request, validate that it matches one or more of
these stashed values before attempting unpins.

Each stashed iova represents an IRQ that was enabled for a queue.
Therefore, if a match is found, trigger IRQ disable for this queue to
ensure that the underlying firmware will no longer try to use the
associated pfn after the page is unpinned. IRQ disable will also handle
the associated unpin.

Cc: Tony Krowiak <akrowiak@linux.ibm.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Jason Herne <jjherne@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/s390/crypto/vfio_ap_ops.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 0b4cc8c597ae..8bf353d46820 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1535,13 +1535,29 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 	return 0;
 }
 
+static void unmap_iova(struct ap_matrix_mdev *matrix_mdev, u64 iova, u64 length)
+{
+	struct ap_queue_table *qtable = &matrix_mdev->qtable;
+	struct vfio_ap_queue *q;
+	int loop_cursor;
+
+	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
+		if (q->saved_iova >= iova && q->saved_iova < iova + length)
+			vfio_ap_irq_disable(q);
+	}
+}
+
 static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
 				   u64 length)
 {
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
+	mutex_lock(&matrix_dev->mdevs_lock);
+
+	unmap_iova(matrix_mdev, iova, length);
+
+	mutex_unlock(&matrix_dev->mdevs_lock);
 }
 
 /**
-- 
2.34.1



* Re: [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable Yi Liu
@ 2022-11-29 22:02   ` Anthony Krowiak
  0 siblings, 0 replies; 8+ messages in thread
From: Anthony Krowiak @ 2022-11-29 22:02 UTC (permalink / raw)
  To: Yi Liu, jgg
  Cc: alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng, yi.y.sun,
	intel-gvt-dev, linux-s390, Halil Pasic, Jason Herne

LGTM:

Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>

On 11/29/22 5:58 AM, Yi Liu wrote:
> From: Matthew Rosato <mjrosato@linux.ibm.com>
>
> Currently, each mapped iova is stashed in its associated vfio_ap_queue;
> when we get an unmap request, validate that it matches one or more of
> these stashed values before attempting unpins.
>
> Each stashed iova represents an IRQ that was enabled for a queue.
> Therefore, if a match is found, trigger IRQ disable for this queue to
> ensure that the underlying firmware will no longer try to use the
> associated pfn after the page is unpinned. IRQ disable will also handle
> the associated unpin.
>
> Cc: Tony Krowiak <akrowiak@linux.ibm.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Jason Herne <jjherne@linux.ibm.com>
> Cc: linux-s390@vger.kernel.org
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> ---
>   drivers/s390/crypto/vfio_ap_ops.c | 18 +++++++++++++++++-
>   1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 0b4cc8c597ae..8bf353d46820 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1535,13 +1535,29 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
>   	return 0;
>   }
>   
> +static void unmap_iova(struct ap_matrix_mdev *matrix_mdev, u64 iova, u64 length)
> +{
> +	struct ap_queue_table *qtable = &matrix_mdev->qtable;
> +	struct vfio_ap_queue *q;
> +	int loop_cursor;
> +
> +	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
> +		if (q->saved_iova >= iova && q->saved_iova < iova + length)
> +			vfio_ap_irq_disable(q);
> +	}
> +}
> +
>   static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
>   				   u64 length)
>   {
>   	struct ap_matrix_mdev *matrix_mdev =
>   		container_of(vdev, struct ap_matrix_mdev, vdev);
>   
> -	vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
> +	mutex_lock(&matrix_dev->mdevs_lock);
> +
> +	unmap_iova(matrix_mdev, iova, length);
> +
> +	mutex_unlock(&matrix_dev->mdevs_lock);
>   }
>   
>   /**


* Re: [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev()
  2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev() Yi Liu
@ 2022-12-01  3:25   ` Zhenyu Wang
  2022-12-01  4:18     ` Yi Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Zhenyu Wang @ 2022-12-01  3:25 UTC (permalink / raw)
  To: Yi Liu
  Cc: jgg, alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng,
	yi.y.sun, intel-gvt-dev, linux-s390, Zhenyu Wang, Zhi Wang

On 2022.11.29 02:58:30 -0800, Yi Liu wrote:
> The vfio container registers the .dma_unmap() callback after the device is
> opened, so it's fine for mdev drivers to initialize their internal mapping
> cache in .open_device(). See vfio_device_container_register().
> 
> Now with iommufd, an access ops with an unmap callback is registered
> when the device is bound to iommufd, which is before .open_device()
> is called. This implies gvt's .dma_unmap() could be called before its
> internal mapping cache is initialized.
> 
> The fix is to move the gvt mapping cache initialization to vGPU init.
> While at it, also move the ptable initialization.
> 
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> Cc: Zhi Wang <zhi.a.wang@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: intel-gvt-dev@lists.freedesktop.org
> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 7a45e5360caf..f563e5dbe66f 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
>  
>  	vgpu->attached = true;
>  
> -	kvmgt_protect_table_init(vgpu);
> -	gvt_cache_init(vgpu);
> -
>  	vgpu->track_node.track_write = kvmgt_page_track_write;
>  	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
>  	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
> @@ -1451,9 +1448,17 @@ static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
>  	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>  	struct intel_vgpu_type *type =
>  		container_of(mdev->type, struct intel_vgpu_type, type);
> +	int ret;
>  
>  	vgpu->gvt = kdev_to_i915(mdev->type->parent->dev)->gvt;
> -	return intel_gvt_create_vgpu(vgpu, type->conf);
> +	ret = intel_gvt_create_vgpu(vgpu, type->conf);
> +	if (ret)
> +		return ret;
> +
> +	kvmgt_protect_table_init(vgpu);
> +	gvt_cache_init(vgpu);
> +
> +	return 0;

I'm fine with this change, but could we add some sanity check at close
time to ensure we clean up any internal cache? Btw, do we need to reset
rbtree root pointer?

>  }
>  
>  static void intel_vgpu_release_dev(struct vfio_device *vfio_dev)
> -- 
> 2.34.1
> 



* Re: [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev()
  2022-12-01  3:25   ` Zhenyu Wang
@ 2022-12-01  4:18     ` Yi Liu
  2022-12-01  7:21       ` Zhenyu Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Yi Liu @ 2022-12-01  4:18 UTC (permalink / raw)
  To: Zhenyu Wang
  Cc: jgg, alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng,
	yi.y.sun, intel-gvt-dev, linux-s390, Zhi Wang

On 2022/12/1 11:25, Zhenyu Wang wrote:
> On 2022.11.29 02:58:30 -0800, Yi Liu wrote:
>> The vfio container registers the .dma_unmap() callback after the device is
>> opened, so it's fine for mdev drivers to initialize their internal mapping
>> cache in .open_device(). See vfio_device_container_register().
>>
>> Now with iommufd, an access ops with an unmap callback is registered
>> when the device is bound to iommufd, which is before .open_device()
>> is called. This implies gvt's .dma_unmap() could be called before its
>> internal mapping cache is initialized.
>>
>> The fix is to move the gvt mapping cache initialization to vGPU init.
>> While at it, also move the ptable initialization.
>>
>> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
>> Cc: Zhi Wang <zhi.a.wang@intel.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>
>> Cc: intel-gvt-dev@lists.freedesktop.org
>> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gvt/kvmgt.c | 13 +++++++++----
>>   1 file changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
>> index 7a45e5360caf..f563e5dbe66f 100644
>> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
>> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
>> @@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
>>   
>>   	vgpu->attached = true;
>>   
>> -	kvmgt_protect_table_init(vgpu);
>> -	gvt_cache_init(vgpu);
>> -
>>   	vgpu->track_node.track_write = kvmgt_page_track_write;
>>   	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
>>   	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
>> @@ -1451,9 +1448,17 @@ static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
>>   	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>>   	struct intel_vgpu_type *type =
>>   		container_of(mdev->type, struct intel_vgpu_type, type);
>> +	int ret;
>>   
>>   	vgpu->gvt = kdev_to_i915(mdev->type->parent->dev)->gvt;
>> -	return intel_gvt_create_vgpu(vgpu, type->conf);
>> +	ret = intel_gvt_create_vgpu(vgpu, type->conf);
>> +	if (ret)
>> +		return ret;
>> +
>> +	kvmgt_protect_table_init(vgpu);
>> +	gvt_cache_init(vgpu);
>> +
>> +	return 0;
> 
> I'm fine with this change, but could we add some sanity check at close
> time to ensure we clean up any internal cache? Btw, do we need to reset
> rbtree root pointer?

I noticed there is gvt_cache_destroy() in intel_vgpu_close_device(). This
cleans up the internal cache. So even though the rbtree root stays valid,
the tree is empty after close_device(), isn't it?

>>   }
>>   
>>   static void intel_vgpu_release_dev(struct vfio_device *vfio_dev)
>> -- 
>> 2.34.1
>>

-- 
Regards,
Yi Liu


* Re: [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev()
  2022-12-01  4:18     ` Yi Liu
@ 2022-12-01  7:21       ` Zhenyu Wang
  2022-12-01  7:51         ` Yi Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Zhenyu Wang @ 2022-12-01  7:21 UTC (permalink / raw)
  To: Yi Liu
  Cc: jgg, alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng,
	yi.y.sun, intel-gvt-dev, linux-s390, Zhi Wang

On 2022.12.01 12:18:29 +0800, Yi Liu wrote:
> On 2022/12/1 11:25, Zhenyu Wang wrote:
> > On 2022.11.29 02:58:30 -0800, Yi Liu wrote:
> > > The vfio container registers the .dma_unmap() callback after the device is
> > > opened, so it's fine for mdev drivers to initialize their internal mapping
> > > cache in .open_device(). See vfio_device_container_register().
> > > 
> > > Now with iommufd, an access ops with an unmap callback is registered
> > > when the device is bound to iommufd, which is before .open_device()
> > > is called. This implies gvt's .dma_unmap() could be called before its
> > > internal mapping cache is initialized.
> > > 
> > > The fix is to move the gvt mapping cache initialization to vGPU init.
> > > While at it, also move the ptable initialization.
> > > 
> > > Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> > > Cc: Zhi Wang <zhi.a.wang@intel.com>
> > > Cc: Kevin Tian <kevin.tian@intel.com>
> > > Cc: intel-gvt-dev@lists.freedesktop.org
> > > Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
> > > Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> > > ---
> > >   drivers/gpu/drm/i915/gvt/kvmgt.c | 13 +++++++++----
> > >   1 file changed, 9 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > index 7a45e5360caf..f563e5dbe66f 100644
> > > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > @@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
> > >   	vgpu->attached = true;
> > > -	kvmgt_protect_table_init(vgpu);
> > > -	gvt_cache_init(vgpu);
> > > -
> > >   	vgpu->track_node.track_write = kvmgt_page_track_write;
> > >   	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
> > >   	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
> > > @@ -1451,9 +1448,17 @@ static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
> > >   	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
> > >   	struct intel_vgpu_type *type =
> > >   		container_of(mdev->type, struct intel_vgpu_type, type);
> > > +	int ret;
> > >   	vgpu->gvt = kdev_to_i915(mdev->type->parent->dev)->gvt;
> > > -	return intel_gvt_create_vgpu(vgpu, type->conf);
> > > +	ret = intel_gvt_create_vgpu(vgpu, type->conf);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	kvmgt_protect_table_init(vgpu);
> > > +	gvt_cache_init(vgpu);
> > > +
> > > +	return 0;
> > 
> > I'm fine with this change, but could we add some sanity check at close
> > time to ensure we clean up any internal cache? Btw, do we need to reset
> > rbtree root pointer?
> 
> I noticed there is gvt_cache_destroy() in intel_vgpu_close_device(). This
> cleans up the internal cache. So even the rbtree root is valid, it is an
> empty per close_device(). isn't it?
>

I'd like to see an explicit sanity check on vgpu->nr_cache_entries and a
reset of the rb root at close time. That matches the current code behavior
without requiring a full re-init.

> > >   }
> > >   static void intel_vgpu_release_dev(struct vfio_device *vfio_dev)
> > > -- 
> > > 2.34.1
> > > 
> 
> -- 
> Regards,
> Yi Liu



* Re: [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev()
  2022-12-01  7:21       ` Zhenyu Wang
@ 2022-12-01  7:51         ` Yi Liu
  0 siblings, 0 replies; 8+ messages in thread
From: Yi Liu @ 2022-12-01  7:51 UTC (permalink / raw)
  To: Zhenyu Wang
  Cc: jgg, alex.williamson, kevin.tian, kvm, mjrosato, chao.p.peng,
	yi.y.sun, intel-gvt-dev, linux-s390, Zhi Wang

On 2022/12/1 15:21, Zhenyu Wang wrote:
> On 2022.12.01 12:18:29 +0800, Yi Liu wrote:
>> On 2022/12/1 11:25, Zhenyu Wang wrote:
>>> On 2022.11.29 02:58:30 -0800, Yi Liu wrote:
>>>> The vfio container registers the .dma_unmap() callback after the device is
>>>> opened, so it's fine for mdev drivers to initialize their internal mapping
>>>> cache in .open_device(). See vfio_device_container_register().
>>>>
>>>> Now with iommufd, an access ops with an unmap callback is registered
>>>> when the device is bound to iommufd, which is before .open_device()
>>>> is called. This implies gvt's .dma_unmap() could be called before its
>>>> internal mapping cache is initialized.
>>>>
>>>> The fix is to move the gvt mapping cache initialization to vGPU init.
>>>> While at it, also move the ptable initialization.
>>>>
>>>> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
>>>> Cc: Zhi Wang <zhi.a.wang@intel.com>
>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>> Cc: intel-gvt-dev@lists.freedesktop.org
>>>> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
>>>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>>>> ---
>>>>    drivers/gpu/drm/i915/gvt/kvmgt.c | 13 +++++++++----
>>>>    1 file changed, 9 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
>>>> index 7a45e5360caf..f563e5dbe66f 100644
>>>> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
>>>> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
>>>> @@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
>>>>    	vgpu->attached = true;
>>>> -	kvmgt_protect_table_init(vgpu);
>>>> -	gvt_cache_init(vgpu);
>>>> -
>>>>    	vgpu->track_node.track_write = kvmgt_page_track_write;
>>>>    	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
>>>>    	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
>>>> @@ -1451,9 +1448,17 @@ static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
>>>>    	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>>>>    	struct intel_vgpu_type *type =
>>>>    		container_of(mdev->type, struct intel_vgpu_type, type);
>>>> +	int ret;
>>>>    	vgpu->gvt = kdev_to_i915(mdev->type->parent->dev)->gvt;
>>>> -	return intel_gvt_create_vgpu(vgpu, type->conf);
>>>> +	ret = intel_gvt_create_vgpu(vgpu, type->conf);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	kvmgt_protect_table_init(vgpu);
>>>> +	gvt_cache_init(vgpu);
>>>> +
>>>> +	return 0;
>>>
>>> I'm fine with this change, but could we add some sanity check at close
>>> time to ensure we clean up any internal cache? Btw, do we need to reset
>>> rbtree root pointer?
>>
>> I noticed there is gvt_cache_destroy() in intel_vgpu_close_device(). This
>> cleans up the internal cache. So even the rbtree root is valid, it is an
>> empty per close_device(). isn't it?
>>
> 
> I'd like to see an explicit sanity check on vgpu->nr_cache_entries and
> reset rb root at close time, which matches current code behavior, but
> not need to do re-init.

Do you mean checking vgpu->nr_cache_entries before calling
gvt_cache_destroy()? It can legitimately be non-zero at that point, so even
if a non-zero value is detected there, nothing should be done. But if a
non-zero nr_cache_entries is detected after gvt_cache_destroy(), that would
be a problem, as gvt_cache_destroy() should bring nr_cache_entries to zero.
Is there any chance it is still non-zero after gvt_cache_destroy()?

static void gvt_cache_destroy(struct intel_vgpu *vgpu)
{
         struct gvt_dma *dma;
         struct rb_node *node = NULL;

         for (;;) {
                 mutex_lock(&vgpu->cache_lock);
                 node = rb_first(&vgpu->gfn_cache);
                 if (!node) {
                         mutex_unlock(&vgpu->cache_lock);
                         break;
                 }
                 dma = rb_entry(node, struct gvt_dma, gfn_node);
                 gvt_dma_unmap_page(vgpu, dma->gfn, dma->dma_addr, dma->size);
                 __gvt_cache_remove_entry(vgpu, dma); /* decrements nr_cache_entries */
                 mutex_unlock(&vgpu->cache_lock);
         }
}

-- 
Regards,
Yi Liu


end of thread, other threads:[~2022-12-01  7:51 UTC | newest]

Thread overview: 8+ messages
2022-11-29 10:58 [[RESEND] iommufd PATCH v2 0/2] Make mdev driver dma_unmap callback tolerant to unmaps come before device open Yi Liu
2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 1/2] i915/gvt: Move gvt mapping cache initialization to intel_vgpu_init_dev() Yi Liu
2022-12-01  3:25   ` Zhenyu Wang
2022-12-01  4:18     ` Yi Liu
2022-12-01  7:21       ` Zhenyu Wang
2022-12-01  7:51         ` Yi Liu
2022-11-29 10:58 ` [[RESEND] iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable Yi Liu
2022-11-29 22:02   ` Anthony Krowiak
