public inbox for intel-gfx@lists.freedesktop.org
* [PATCH] drm/i915/guc: Defer LRC context unpin or release
@ 2015-11-06 21:56 yu.dai
  2015-11-06 22:03 ` Yu Dai
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: yu.dai @ 2015-11-06 21:56 UTC (permalink / raw)
  To: intel-gfx

From: Alex Dai <yu.dai@intel.com>

The LRC context can't be freed (or even unpinned) immediately when
all of its referenced requests have completed, because the HW still
needs a short period of time to save data to the LRC status page. It
is only safe to free an LRC after the HW has completed a request from
a different LRC.

Move the LRC to ring->last_unpin_ctx when its pin count reaches zero,
and take an extra ctx reference so that it won't be freed immediately.
When the HW completes the next request from a different LRC, release
the deferred one. If the deferred context is pinned again first,
simply drop the extra reference.
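The defer/cancel/release logic above can be modelled outside the driver. The
sketch below is illustrative only (the struct and function names are made up,
not the i915 API), but it follows the same three rules: an unpin that would
reach zero keeps the context pinned and takes an extra reference, a later pin
of the same context cancels the deferral, and an unpin against a different
context releases the deferred one:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of the deferred-release scheme; not the i915 API. */
struct ctx {
	int refcount;   /* stands in for i915_gem_context_(un)reference */
	int pin_count;  /* stands in for engine[id].pin_count */
};

struct ring {
	struct ctx *last_unpin_ctx;  /* deferred: still pinned + referenced */
};

/* Release the context whose unpin was deferred, if any. */
static void release_last(struct ring *r)
{
	struct ctx *c = r->last_unpin_ctx;

	if (!c)
		return;
	c->pin_count--;  /* drop the pin kept back at defer time */
	c->refcount--;   /* drop the extra reference */
	r->last_unpin_ctx = NULL;
}

/* Pinning the deferred context again cancels the pending release. */
static void ctx_pin(struct ring *r, struct ctx *c)
{
	c->pin_count++;
	if (r->last_unpin_ctx == c) {
		c->pin_count--;
		c->refcount--;
		r->last_unpin_ctx = NULL;
	}
}

/* An unpin that would reach zero is deferred instead of released. */
static void ctx_unpin(struct ring *r, struct ctx *c)
{
	/* HW completed a request from a different LRC: release the old one. */
	if (r->last_unpin_ctx != c)
		release_last(r);

	if (--c->pin_count == 0) {
		c->pin_count++;  /* keep it pinned... */
		c->refcount++;   /* ...and referenced */
		r->last_unpin_ctx = c;
	}
}
```

Running two contexts through this model reproduces the intended behaviour:
the first context stays pinned and referenced until an unpin of the second
context on the same ring releases it.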

Signed-off-by: Alex Dai <yu.dai@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c        | 54 ++++++++++++++++++++++++++++-----
 drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
 2 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 06180dc..da01d72 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1051,6 +1051,14 @@ static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
 		if (ret)
 			goto reset_pin_count;
 	}
+
+	/* If we are holding this LRC from last unpin, unref it here. */
+	if (ring->last_unpin_ctx == rq->ctx) {
+		rq->ctx->engine[ring->id].pin_count--;
+		i915_gem_context_unreference(rq->ctx);
+		ring->last_unpin_ctx = NULL;
+	}
+
 	return ret;
 
 reset_pin_count:
@@ -1058,18 +1066,46 @@ reset_pin_count:
 	return ret;
 }
 
+static void
+lrc_unpin_last_ctx(struct intel_engine_cs *ring)
+{
+	struct intel_context *ctx = ring->last_unpin_ctx;
+	struct drm_i915_gem_object *ctx_obj;
+
+	if (!ctx)
+		return;
+
+	i915_gem_object_ggtt_unpin(ctx->engine[ring->id].state);
+	intel_unpin_ringbuffer_obj(ctx->engine[ring->id].ringbuf);
+
+	WARN_ON(--ctx->engine[ring->id].pin_count);
+	i915_gem_context_unreference(ctx);
+
+	ring->last_unpin_ctx = NULL;
+}
+
 void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
 {
 	struct intel_engine_cs *ring = rq->ring;
-	struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
-	struct intel_ringbuffer *ringbuf = rq->ringbuf;
 
-	if (ctx_obj) {
-		WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
-		if (--rq->ctx->engine[ring->id].pin_count == 0) {
-			intel_unpin_ringbuffer_obj(ringbuf);
-			i915_gem_object_ggtt_unpin(ctx_obj);
-		}
+	if (!rq->ctx->engine[ring->id].state)
+		return;
+
+	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
+
+	/* HW completes request from a different LRC, unpin the last one. */
+	if (ring->last_unpin_ctx != rq->ctx)
+		lrc_unpin_last_ctx(ring);
+
+	if (--rq->ctx->engine[ring->id].pin_count == 0) {
+		/* Last one should be unpinned already */
+		WARN_ON(ring->last_unpin_ctx);
+
+		/* Keep the context pinned and ref-counted */
+		rq->ctx->engine[ring->id].pin_count++;
+		i915_gem_context_reference(rq->ctx);
+
+		ring->last_unpin_ctx = rq->ctx;
 	}
 }
 
@@ -1908,6 +1944,8 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
 	}
 
 	lrc_destroy_wa_ctx_obj(ring);
+
+	lrc_unpin_last_ctx(ring);
 }
 
 static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 58b1976..676d27f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -267,6 +267,7 @@ struct  intel_engine_cs {
 	spinlock_t execlist_lock;
 	struct list_head execlist_queue;
 	struct list_head execlist_retired_req_list;
+	struct intel_context *last_unpin_ctx;
 	u8 next_context_status_buffer;
 	u32             irq_keep_mask; /* bitmask for interrupts that should not be masked */
 	int		(*emit_request)(struct drm_i915_gem_request *request);
-- 
2.5.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH] drm/i915/guc: Defer LRC context unpin or release
  2015-11-06 21:56 [PATCH] drm/i915/guc: Defer LRC context unpin or release yu.dai
@ 2015-11-06 22:03 ` Yu Dai
  2015-11-06 22:09 ` Chris Wilson
  2015-11-06 22:12 ` kbuild test robot
  2 siblings, 0 replies; 5+ messages in thread
From: Yu Dai @ 2015-11-06 22:03 UTC (permalink / raw)
  To: intel-gfx

Sorry for the wrong patch. Please forget about this one. Another one will
be submitted.

Alex




* Re: [PATCH] drm/i915/guc: Defer LRC context unpin or release
  2015-11-06 21:56 [PATCH] drm/i915/guc: Defer LRC context unpin or release yu.dai
  2015-11-06 22:03 ` Yu Dai
@ 2015-11-06 22:09 ` Chris Wilson
  2015-11-06 23:42   ` Yu Dai
  2015-11-06 22:12 ` kbuild test robot
  2 siblings, 1 reply; 5+ messages in thread
From: Chris Wilson @ 2015-11-06 22:09 UTC (permalink / raw)
  To: yu.dai; +Cc: intel-gfx

On Fri, Nov 06, 2015 at 01:56:59PM -0800, yu.dai@intel.com wrote:
> From: Alex Dai <yu.dai@intel.com>
> 
> The LRC context can't be freed (or even unpinned) immediately when
> all of its referenced requests have completed, because the HW still
> needs a short period of time to save data to the LRC status page. It
> is only safe to free an LRC after the HW has completed a request from
> a different LRC.

See the legacy context switch mechanism for code to reuse - at least
reuse the pointers rather than add yet another almost identically named
one to intel_engine_cs.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


* Re: [PATCH] drm/i915/guc: Defer LRC context unpin or release
  2015-11-06 21:56 [PATCH] drm/i915/guc: Defer LRC context unpin or release yu.dai
  2015-11-06 22:03 ` Yu Dai
  2015-11-06 22:09 ` Chris Wilson
@ 2015-11-06 22:12 ` kbuild test robot
  2 siblings, 0 replies; 5+ messages in thread
From: kbuild test robot @ 2015-11-06 22:12 UTC (permalink / raw)
  To: yu.dai; +Cc: intel-gfx, kbuild-all


Hi Alex,

[auto build test WARNING on drm-intel/for-linux-next]
[also build test WARNING on next-20151106]
[cannot apply to v4.3]

url:    https://github.com/0day-ci/linux/commits/yu-dai-intel-com/drm-i915-guc-Defer-LRC-context-unpin-or-release/20151107-060016
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: x86_64-allyesconfig (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   drivers/gpu/drm/i915/intel_lrc.c: In function 'lrc_unpin_last_ctx':
>> drivers/gpu/drm/i915/intel_lrc.c:1071:30: warning: unused variable 'ctx_obj' [-Wunused-variable]
     struct drm_i915_gem_object *ctx_obj;
                                 ^

vim +/ctx_obj +1071 drivers/gpu/drm/i915/intel_lrc.c

  1055			rq->ctx->engine[ring->id].pin_count--;
  1056			i915_gem_context_unreference(rq->ctx);
  1057			ring->last_unpin_ctx = NULL;
  1058		}
  1059	
  1060		return ret;
  1061	
  1062	reset_pin_count:
  1063		rq->ctx->engine[ring->id].pin_count = 0;
  1064		return ret;
  1065	}
  1066	
  1067	static void
  1068	lrc_unpin_last_ctx(struct intel_engine_cs *ring)
  1069	{
  1070		struct intel_context *ctx = ring->last_unpin_ctx;
> 1071		struct drm_i915_gem_object *ctx_obj;
  1072	
  1073		if (!ctx)
  1074			return;
  1075	
  1076		i915_gem_object_ggtt_unpin(ctx->engine[ring->id].state);
  1077		intel_unpin_ringbuffer_obj(ctx->engine[ring->id].ringbuf);
  1078	
  1079		WARN_ON(--ctx->engine[ring->id].pin_count);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation




* Re: [PATCH] drm/i915/guc: Defer LRC context unpin or release
  2015-11-06 22:09 ` Chris Wilson
@ 2015-11-06 23:42   ` Yu Dai
  0 siblings, 0 replies; 5+ messages in thread
From: Yu Dai @ 2015-11-06 23:42 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx



On 11/06/2015 02:09 PM, Chris Wilson wrote:
> On Fri, Nov 06, 2015 at 01:56:59PM -0800, yu.dai@intel.com wrote:
> > From: Alex Dai <yu.dai@intel.com>
> >
> > The LRC context can't be freed (or even unpinned) immediately when
> > all of its referenced requests have completed, because the HW still
> > needs a short period of time to save data to the LRC status page. It
> > is only safe to free an LRC after the HW has completed a request from
> > a different LRC.
>
> See the legacy context switch mechanism for code to reuse - at least
> reuse the pointers rather than add yet another almost identically named
> one to intel_engine_cs.
> -Chris
>
Sorry, I accidentally submitted the wrong version of this patch. The
correct one is here: https://patchwork.freedesktop.org/patch/64094/.
I used 'retired_ctx' to avoid confusion with the legacy last_context in
intel_engine_cs. 'Retired' here means that all gem_request references
to the context have been retired. However, unpinning or freeing its
backing BO is deferred until the HW completes another batch from a
different LRC. The ref/unref concept is similar to the legacy context
switch, but I don't believe there is any code we can reuse here.

Thanks,
Alex

