dri-devel.lists.freedesktop.org archive mirror
* [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
@ 2018-08-07 10:45 Chris Wilson
       [not found] ` <20180807104500.31264-1-chris-Y6uKTt2uX1cEflXRtASbqLVCufUGDwFn@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Chris Wilson @ 2018-08-07 10:45 UTC (permalink / raw)
  To: intel-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Christian König,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Chris Wilson

amdgpu only uses shared-fences internally, but dmabuf importers rely on
implicit write hazard tracking via the reservation_object.fence_excl.
For example, the importer uses the write hazard for timing a page flip to
only occur after the exporter has finished flushing its write into the
surface. As such, on exporting a dmabuf, we must either flush all
outstanding fences (for we do not know which are writes and should have
been exclusive) or alternatively create a new exclusive fence that is
the composite of all the existing shared fences, and so will only be
signaled when all earlier fences are signaled (ensuring that we can not
be signaled before the completion of any earlier write).
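
The composite-fence rule in that last sentence can be modelled in a few lines of plain C. This is a hypothetical userspace sketch, not kernel code: `toy_fence` and `toy_array_signaled` are made-up stand-ins for `struct dma_fence` and the dma_fence_array signaling rule, purely to illustrate why the composite exclusive fence cannot signal before any earlier write completes.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct dma_fence: just a signaled flag. */
struct toy_fence {
	bool signaled;
};

/*
 * Model of the dma_fence_array signaling rule: the composite
 * (exclusive) fence is signaled only when every component (shared)
 * fence is signaled, so it can never fire before an outstanding
 * write that happens to be hiding among the shared fences.
 */
static bool toy_array_signaled(const struct toy_fence *fences, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++)
		if (!fences[i].signaled)
			return false;
	return true;
}
```

With three shared fences of which one is still outstanding, the composite stays unsignaled; only once the last component signals does the composite signal as well.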

Testcase: igt/amd_prime/amd-to-i915
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 70 ++++++++++++++++++++---
 1 file changed, 62 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 1c5d97f4b4dd..47e6ec5510b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -37,6 +37,7 @@
 #include "amdgpu_display.h"
 #include <drm/amdgpu_drm.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence-array.h>
 
 static const struct dma_buf_ops amdgpu_dmabuf_ops;
 
@@ -188,6 +189,57 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	return ERR_PTR(ret);
 }
 
+static int
+__reservation_object_make_exclusive(struct reservation_object *obj)
+{
+	struct reservation_object_list *fobj;
+	struct dma_fence_array *array;
+	struct dma_fence **fences;
+	unsigned int count, i;
+
+	fobj = reservation_object_get_list(obj);
+	if (!fobj)
+		return 0;
+
+	count = !!rcu_access_pointer(obj->fence_excl);
+	count += fobj->shared_count;
+
+	fences = kmalloc_array(sizeof(*fences), count, GFP_KERNEL);
+	if (!fences)
+		return -ENOMEM;
+
+	for (i = 0; i < fobj->shared_count; i++) {
+		struct dma_fence *f =
+			rcu_dereference_protected(fobj->shared[i],
+						  reservation_object_held(obj));
+
+		fences[i] = dma_fence_get(f);
+	}
+
+	if (rcu_access_pointer(obj->fence_excl)) {
+		struct dma_fence *f =
+			rcu_dereference_protected(obj->fence_excl,
+						  reservation_object_held(obj));
+
+		fences[i] = dma_fence_get(f);
+	}
+
+	array = dma_fence_array_create(count, fences,
+				       dma_fence_context_alloc(1), 0,
+				       false);
+	if (!array)
+		goto err_fences_put;
+
+	reservation_object_add_excl_fence(obj, &array->base);
+	return 0;
+
+err_fences_put:
+	for (i = 0; i < count; i++)
+		dma_fence_put(fences[i]);
+	kfree(fences);
+	return -ENOMEM;
+}
+
 /**
  * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
  * @dma_buf: shared DMA buffer
@@ -219,16 +271,18 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 
 	if (attach->dev->driver != adev->dev->driver) {
 		/*
-		 * Wait for all shared fences to complete before we switch to future
-		 * use of exclusive fence on this prime shared bo.
+		 * We only create shared fences for internal use, but importers
+		 * of the dmabuf rely on exclusive fences for implicitly
+		 * tracking write hazards. As any of the current fences may
+		 * correspond to a write, we need to convert all existing
+		 * fences on the reservation object into a single exclusive
+		 * fence.
 		 */
-		r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
-							true, false,
-							MAX_SCHEDULE_TIMEOUT);
-		if (unlikely(r < 0)) {
-			DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
+		reservation_object_lock(bo->tbo.resv, NULL);
+		r = __reservation_object_make_exclusive(bo->tbo.resv);
+		reservation_object_unlock(bo->tbo.resv);
+		if (r)
 			goto error_unreserve;
-		}
 	}
 
 	/* pin buffer into GTT */
-- 
2.18.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

* Re: [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
       [not found] ` <20180807104500.31264-1-chris-Y6uKTt2uX1cEflXRtASbqLVCufUGDwFn@public.gmane.org>
@ 2018-08-07 10:56   ` Huang Rui
  2018-08-07 11:05     ` Chris Wilson
  0 siblings, 1 reply; 5+ messages in thread
From: Huang Rui @ 2018-08-07 10:56 UTC (permalink / raw)
  To: Chris Wilson
  Cc: Alex Deucher, intel-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Christian König,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Tue, Aug 07, 2018 at 11:45:00AM +0100, Chris Wilson wrote:
> amdgpu only uses shared-fences internally, but dmabuf importers rely on
> implicit write hazard tracking via the reservation_object.fence_excl.
> For example, the importer uses the write hazard for timing a page flip to
> only occur after the exporter has finished flushing its write into the
> surface. As such, on exporting a dmabuf, we must either flush all
> outstanding fences (for we do not know which are writes and should have
> been exclusive) or alternatively create a new exclusive fence that is
> the composite of all the existing shared fences, and so will only be
> signaled when all earlier fences are signaled (ensuring that we can not
> be signaled before the completion of any earlier write).
> 
> Testcase: igt/amd_prime/amd-to-i915
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 70 ++++++++++++++++++++---
>  1 file changed, 62 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> index 1c5d97f4b4dd..47e6ec5510b6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> @@ -37,6 +37,7 @@
>  #include "amdgpu_display.h"
>  #include <drm/amdgpu_drm.h>
>  #include <linux/dma-buf.h>
> +#include <linux/dma-fence-array.h>
>  
>  static const struct dma_buf_ops amdgpu_dmabuf_ops;
>  
> @@ -188,6 +189,57 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
>  	return ERR_PTR(ret);
>  }
>  
> +static int
> +__reservation_object_make_exclusive(struct reservation_object *obj)
> +{

Why not you move the helper to reservation.c, and then export symbol for
this file?

Thanks,
Ray

> +	struct reservation_object_list *fobj;
> +	struct dma_fence_array *array;
> +	struct dma_fence **fences;
> +	unsigned int count, i;
> +
> +	fobj = reservation_object_get_list(obj);
> +	if (!fobj)
> +		return 0;
> +
> +	count = !!rcu_access_pointer(obj->fence_excl);
> +	count += fobj->shared_count;
> +
> +	fences = kmalloc_array(sizeof(*fences), count, GFP_KERNEL);
> +	if (!fences)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < fobj->shared_count; i++) {
> +		struct dma_fence *f =
> +			rcu_dereference_protected(fobj->shared[i],
> +						  reservation_object_held(obj));
> +
> +		fences[i] = dma_fence_get(f);
> +	}
> +
> +	if (rcu_access_pointer(obj->fence_excl)) {
> +		struct dma_fence *f =
> +			rcu_dereference_protected(obj->fence_excl,
> +						  reservation_object_held(obj));
> +
> +		fences[i] = dma_fence_get(f);
> +	}
> +
> +	array = dma_fence_array_create(count, fences,
> +				       dma_fence_context_alloc(1), 0,
> +				       false);
> +	if (!array)
> +		goto err_fences_put;
> +
> +	reservation_object_add_excl_fence(obj, &array->base);
> +	return 0;
> +
> +err_fences_put:
> +	for (i = 0; i < count; i++)
> +		dma_fence_put(fences[i]);
> +	kfree(fences);
> +	return -ENOMEM;
> +}
> +
>  /**
>   * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
>   * @dma_buf: shared DMA buffer
> @@ -219,16 +271,18 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
>  
>  	if (attach->dev->driver != adev->dev->driver) {
>  		/*
> -		 * Wait for all shared fences to complete before we switch to future
> -		 * use of exclusive fence on this prime shared bo.
> +		 * We only create shared fences for internal use, but importers
> +		 * of the dmabuf rely on exclusive fences for implicitly
> +		 * tracking write hazards. As any of the current fences may
> +		 * correspond to a write, we need to convert all existing
> +		 * fences on the reservation object into a single exclusive
> +		 * fence.
>  		 */
> -		r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
> -							true, false,
> -							MAX_SCHEDULE_TIMEOUT);
> -		if (unlikely(r < 0)) {
> -			DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
> +		reservation_object_lock(bo->tbo.resv, NULL);
> +		r = __reservation_object_make_exclusive(bo->tbo.resv);
> +		reservation_object_unlock(bo->tbo.resv);
> +		if (r)
>  			goto error_unreserve;
> -		}
>  	}
>  
>  	/* pin buffer into GTT */
> -- 
> 2.18.0
> 

* Re: [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
  2018-08-07 10:56   ` Huang Rui
@ 2018-08-07 11:05     ` Chris Wilson
  0 siblings, 0 replies; 5+ messages in thread
From: Chris Wilson @ 2018-08-07 11:05 UTC (permalink / raw)
  To: Huang Rui
  Cc: Alex Deucher, intel-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW, Christian König,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Quoting Huang Rui (2018-08-07 11:56:24)
> On Tue, Aug 07, 2018 at 11:45:00AM +0100, Chris Wilson wrote:
> > amdgpu only uses shared-fences internally, but dmabuf importers rely on
> > implicit write hazard tracking via the reservation_object.fence_excl.
> > For example, the importer uses the write hazard for timing a page flip to
> > only occur after the exporter has finished flushing its write into the
> > surface. As such, on exporting a dmabuf, we must either flush all
> > outstanding fences (for we do not know which are writes and should have
> > been exclusive) or alternatively create a new exclusive fence that is
> > the composite of all the existing shared fences, and so will only be
> > signaled when all earlier fences are signaled (ensuring that we can not
> > be signaled before the completion of any earlier write).
> > 
> > Testcase: igt/amd_prime/amd-to-i915
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Alex Deucher <alexander.deucher@amd.com>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 70 ++++++++++++++++++++---
> >  1 file changed, 62 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> > index 1c5d97f4b4dd..47e6ec5510b6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> > @@ -37,6 +37,7 @@
> >  #include "amdgpu_display.h"
> >  #include <drm/amdgpu_drm.h>
> >  #include <linux/dma-buf.h>
> > +#include <linux/dma-fence-array.h>
> >  
> >  static const struct dma_buf_ops amdgpu_dmabuf_ops;
> >  
> > @@ -188,6 +189,57 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
> >       return ERR_PTR(ret);
> >  }
> >  
> > +static int
> > +__reservation_object_make_exclusive(struct reservation_object *obj)
> > +{
> 
> Why not you move the helper to reservation.c, and then export symbol for
> this file?

I have not seen anything else that would wish to use this helper. The
first task is to solve this issue here before worrying about
generalisation.
-Chris

* [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
@ 2019-01-30 10:55 Chris Wilson
       [not found] ` <20190130105517.23977-1-chris-Y6uKTt2uX1cEflXRtASbqLVCufUGDwFn@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Chris Wilson @ 2019-01-30 10:55 UTC (permalink / raw)
  To: dri-devel; +Cc: Alex Deucher, intel-gfx, Christian König, amd-gfx

amdgpu only uses shared-fences internally, but dmabuf importers rely on
implicit write hazard tracking via the reservation_object.fence_excl.
For example, the importer uses the write hazard for timing a page flip to
only occur after the exporter has finished flushing its write into the
surface. As such, on exporting a dmabuf, we must either flush all
outstanding fences (for we do not know which are writes and should have
been exclusive) or alternatively create a new exclusive fence that is
the composite of all the existing shared fences, and so will only be
signaled when all earlier fences are signaled (ensuring that we can not
be signaled before the completion of any earlier write).

v2: reservation_object is already locked by amdgpu_bo_reserve()
v3: Replace looping with get_fences_rcu and special case the promotion
of a single shared fence directly to an exclusive fence, bypassing the
fence array.
v4: Drop the fence array ref after assigning to reservation_object

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107341
Testcase: igt/amd_prime/amd-to-i915
References: 8e94a46c1770 ("drm/amdgpu: Attach exclusive fence to prime exported bo's. (v5)")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: "Christian König" <christian.koenig@amd.com>
---
We may disagree on the best long term strategy for fence semantics, but
I think this is still a nice short term solution to the blocking
behaviour on exporting amdgpu to prime.
-Chris
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 59 ++++++++++++++++++++---
 1 file changed, 51 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 71913a18d142..a38e0fb4a6fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -38,6 +38,7 @@
 #include "amdgpu_gem.h"
 #include <drm/amdgpu_drm.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-fence-array.h>
 
 /**
  * amdgpu_gem_prime_get_sg_table - &drm_driver.gem_prime_get_sg_table
@@ -187,6 +188,48 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	return ERR_PTR(ret);
 }
 
+static int
+__reservation_object_make_exclusive(struct reservation_object *obj)
+{
+	struct dma_fence **fences;
+	unsigned int count;
+	int r;
+
+	if (!reservation_object_get_list(obj)) /* no shared fences to convert */
+		return 0;
+
+	r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
+	if (r)
+		return r;
+
+	if (count == 0) {
+		/* Now that was unexpected. */
+	} else if (count == 1) {
+		reservation_object_add_excl_fence(obj, fences[0]);
+		dma_fence_put(fences[0]);
+		kfree(fences);
+	} else {
+		struct dma_fence_array *array;
+
+		array = dma_fence_array_create(count, fences,
+					       dma_fence_context_alloc(1), 0,
+					       false);
+		if (!array)
+			goto err_fences_put;
+
+		reservation_object_add_excl_fence(obj, &array->base);
+		dma_fence_put(&array->base);
+	}
+
+	return 0;
+
+err_fences_put:
+	while (count--)
+		dma_fence_put(fences[count]);
+	kfree(fences);
+	return -ENOMEM;
+}
+
 /**
  * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
  * @dma_buf: Shared DMA buffer
@@ -218,16 +261,16 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 
 	if (attach->dev->driver != adev->dev->driver) {
 		/*
-		 * Wait for all shared fences to complete before we switch to future
-		 * use of exclusive fence on this prime shared bo.
+		 * We only create shared fences for internal use, but importers
+		 * of the dmabuf rely on exclusive fences for implicitly
+		 * tracking write hazards. As any of the current fences may
+		 * correspond to a write, we need to convert all existing
+		 * fences on the reservation object into a single exclusive
+		 * fence.
 		 */
-		r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
-							true, false,
-							MAX_SCHEDULE_TIMEOUT);
-		if (unlikely(r < 0)) {
-			DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
+		r = __reservation_object_make_exclusive(bo->tbo.resv);
+		if (r)
 			goto error_unreserve;
-		}
 	}
 
 	/* pin buffer into GTT */
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* Re: [PATCH] drm/amdgpu: Transfer fences to dmabuf importer
       [not found] ` <20190130105517.23977-1-chris-Y6uKTt2uX1cEflXRtASbqLVCufUGDwFn@public.gmane.org>
@ 2019-01-30 12:00   ` Christian König
  0 siblings, 0 replies; 5+ messages in thread
From: Christian König @ 2019-01-30 12:00 UTC (permalink / raw)
  To: Chris Wilson, dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, intel-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	Christian König, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Am 30.01.19 um 11:55 schrieb Chris Wilson:
> amdgpu only uses shared-fences internally, but dmabuf importers rely on
> implicit write hazard tracking via the reservation_object.fence_excl.
> For example, the importer uses the write hazard for timing a page flip to
> only occur after the exporter has finished flushing its write into the
> surface. As such, on exporting a dmabuf, we must either flush all
> outstanding fences (for we do not know which are writes and should have
> been exclusive) or alternatively create a new exclusive fence that is
> the composite of all the existing shared fences, and so will only be
> signaled when all earlier fences are signaled (ensuring that we can not
> be signaled before the completion of any earlier write).
>
> v2: reservation_object is already locked by amdgpu_bo_reserve()
> v3: Replace looping with get_fences_rcu and special case the promotion
> of a single shared fence directly to an exclusive fence, bypassing the
> fence array.
> v4: Drop the fence array ref after assigning to reservation_object
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107341
> Testcase: igt/amd_prime/amd-to-i915
> References: 8e94a46c1770 ("drm/amdgpu: Attach exclusive fence to prime exported bo's. (v5)")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Reviewed-by: "Christian König" <christian.koenig@amd.com>
> ---
> We may disagree on the best long term strategy for fence semantics, but
> I think this is still a nice short term solution to the blocking
> behaviour on exporting amdgpu to prime.

Yeah, I can agree on that. And just pushed the patch to 
amd-staging-drm-next.

Christian.

> -Chris
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 59 ++++++++++++++++++++---
>   1 file changed, 51 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> index 71913a18d142..a38e0fb4a6fe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> @@ -38,6 +38,7 @@
>   #include "amdgpu_gem.h"
>   #include <drm/amdgpu_drm.h>
>   #include <linux/dma-buf.h>
> +#include <linux/dma-fence-array.h>
>   
>   /**
>    * amdgpu_gem_prime_get_sg_table - &drm_driver.gem_prime_get_sg_table
> @@ -187,6 +188,48 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
>   	return ERR_PTR(ret);
>   }
>   
> +static int
> +__reservation_object_make_exclusive(struct reservation_object *obj)
> +{
> +	struct dma_fence **fences;
> +	unsigned int count;
> +	int r;
> +
> +	if (!reservation_object_get_list(obj)) /* no shared fences to convert */
> +		return 0;
> +
> +	r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
> +	if (r)
> +		return r;
> +
> +	if (count == 0) {
> +		/* Now that was unexpected. */
> +	} else if (count == 1) {
> +		reservation_object_add_excl_fence(obj, fences[0]);
> +		dma_fence_put(fences[0]);
> +		kfree(fences);
> +	} else {
> +		struct dma_fence_array *array;
> +
> +		array = dma_fence_array_create(count, fences,
> +					       dma_fence_context_alloc(1), 0,
> +					       false);
> +		if (!array)
> +			goto err_fences_put;
> +
> +		reservation_object_add_excl_fence(obj, &array->base);
> +		dma_fence_put(&array->base);
> +	}
> +
> +	return 0;
> +
> +err_fences_put:
> +	while (count--)
> +		dma_fence_put(fences[count]);
> +	kfree(fences);
> +	return -ENOMEM;
> +}
> +
>   /**
>    * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
>    * @dma_buf: Shared DMA buffer
> @@ -218,16 +261,16 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
>   
>   	if (attach->dev->driver != adev->dev->driver) {
>   		/*
> -		 * Wait for all shared fences to complete before we switch to future
> -		 * use of exclusive fence on this prime shared bo.
> +		 * We only create shared fences for internal use, but importers
> +		 * of the dmabuf rely on exclusive fences for implicitly
> +		 * tracking write hazards. As any of the current fences may
> +		 * correspond to a write, we need to convert all existing
> +		 * fences on the reservation object into a single exclusive
> +		 * fence.
>   		 */
> -		r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
> -							true, false,
> -							MAX_SCHEDULE_TIMEOUT);
> -		if (unlikely(r < 0)) {
> -			DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
> +		r = __reservation_object_make_exclusive(bo->tbo.resv);
> +		if (r)
>   			goto error_unreserve;
> -		}
>   	}
>   
>   	/* pin buffer into GTT */

