public inbox for intel-gfx@lists.freedesktop.org
From: Dave Gordon <david.s.gordon@intel.com>
To: Thomas Daniel <thomas.daniel@intel.com>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/i915: Reset logical ring contexts' head and tail during GPU reset
Date: Tue, 17 Feb 2015 11:34:31 +0000	[thread overview]
Message-ID: <54E32747.1040303@intel.com> (raw)
In-Reply-To: <1424103173-17387-1-git-send-email-thomas.daniel@intel.com>

On 16/02/15 16:12, Thomas Daniel wrote:
> Work was getting left behind in LRC contexts during reset.  This causes a hang
> if the GPU is reset when HEAD==TAIL because the context's ringbuffer head and
> tail don't get reset and retiring a request doesn't alter them, so the ring
> still appears full.
> 
> Added a function intel_lr_context_reset() to reset head and tail on an LRC and
> its ringbuffer.
> 
> Call intel_lr_context_reset() for each context in i915_gem_context_reset() when
> in execlists mode.
> 
> Testcase: igt/pm_rps --run-subtest reset #bdw
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88096
> Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem_context.c |   12 +++++++----
>  drivers/gpu/drm/i915/intel_lrc.c        |   34 +++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/intel_lrc.h        |    2 ++
>  3 files changed, 44 insertions(+), 4 deletions(-)
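
To restate the failure mode in isolation: once the queued work is thrown away by the reset, nothing ever retires, so the stale head/tail keep reporting the pre-reset (full) state and any wait-for-space loop spins forever. A minimal userspace sketch of that dynamic (illustrative only -- not the driver's actual space arithmetic or constants):

```c
#include <assert.h>

#define RING_SIZE 4096

struct ring {
	unsigned int head;	/* consumer offset, normally advanced on retire */
	unsigned int tail;	/* producer offset, advanced on emit */
};

/* Bytes available for new commands; a few bytes are kept in reserve
 * so that head == tail unambiguously means "empty". */
unsigned int ring_space(const struct ring *r)
{
	return (r->head - r->tail - 4) & (RING_SIZE - 1);
}

/* After a reset that destroys all outstanding requests, retiring can
 * no longer advance head, so the only way to recover the space is to
 * force both offsets back to zero -- what intel_lr_context_reset()
 * does for each context. */
void ring_reset(struct ring *r)
{
	r->head = 0;
	r->tail = 0;
}
```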

A couple of minor points below, but not ones that require changes, so:

Reviewed-by: Dave Gordon <david.s.gordon@intel.com>

> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index 8603bf4..70346b0 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -296,11 +296,15 @@ void i915_gem_context_reset(struct drm_device *dev)
>  	struct drm_i915_private *dev_priv = dev->dev_private;
>  	int i;
>  
> -	/* In execlists mode we will unreference the context when the execlist
> -	 * queue is cleared and the requests destroyed.
> -	 */
> -	if (i915.enable_execlists)
> +	if (i915.enable_execlists) {
> +		struct intel_context *ctx;
> +
> +		list_for_each_entry(ctx, &dev_priv->context_list, link) {
> +			intel_lr_context_reset(dev, ctx);
> +		}
> +
>  		return;
> +	}
>  
>  	for (i = 0; i < I915_NUM_RINGS; i++) {
>  		struct intel_engine_cs *ring = &dev_priv->ring[i];
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index aafcef3..1946bb9 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1950,3 +1950,37 @@ error_unpin_ctx:
>  	drm_gem_object_unreference(&ctx_obj->base);
>  	return ret;
>  }
> +
> +void intel_lr_context_reset(struct drm_device *dev,
> +			struct intel_context *ctx)
> +{
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	struct intel_engine_cs *ring;
> +	int i;
> +
> +	for_each_ring(ring, dev_priv, i) {
> +		struct drm_i915_gem_object *ctx_obj =
> +				ctx->engine[ring->id].state;
> +		if (ctx_obj) {
> +			struct intel_ringbuffer *ringbuf =
> +					ctx->engine[ring->id].ringbuf;
> +			uint32_t *reg_state;
> +			struct page *page;
> +
> +			if (i915_gem_object_get_pages(ctx_obj)) {
> +				WARN(1, "Failed get_pages for context obj\n");
> +				continue;
> +			}

This could be folded into a single "if (WARN_ON(...)) continue;"
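
For reference, the fold works because the kernel's WARN_ON() evaluates to its condition, so the check and the warning collapse into one guard. A userspace sketch of the pattern, with stand-ins for the kernel macro and for i915_gem_object_get_pages() (both hypothetical; the real macro lives in include/asm-generic/bug.h):

```c
#include <stdio.h>

/* Stand-in for the kernel's WARN_ON(): print once if the condition
 * holds, and evaluate to the condition so it can guard control flow.
 * (Uses a GCC/Clang statement expression, as the kernel does.) */
#define WARN_ON(cond) \
	({ int __c = !!(cond); \
	   if (__c) fprintf(stderr, "WARNING: %s\n", #cond); \
	   __c; })

/* Hypothetical stand-in for i915_gem_object_get_pages(): returns
 * non-zero on failure, like the real call. */
int get_pages_stub(int fail) { return fail ? -1 : 0; }

/* The two-statement form
 *	if (get_pages_stub(...)) { warn; continue; }
 * folds into
 *	if (WARN_ON(get_pages_stub(...))) continue;		*/
int process_one(int fail)
{
	if (WARN_ON(get_pages_stub(fail)))
		return 0;	/* skipped, like `continue` in the loop */
	return 1;		/* processed normally */
}
```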

> +			page = i915_gem_object_get_page(ctx_obj, 1);

Isn't it a pity that we have i915_gem_object_get_page() and
i915_gem_object_get_pages() which look so similar but do completely
different things :(

> +			reg_state = kmap_atomic(page);
> +
> +			reg_state[CTX_RING_HEAD+1] = 0;
> +			reg_state[CTX_RING_TAIL+1] = 0;
> +
> +			kunmap_atomic(reg_state);
> +
> +			ringbuf->head = 0;
> +			ringbuf->tail = 0;
> +		}
> +	}
> +}
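
On the +1 indices above, for anyone reading along: as I understand the execlists context image, it is laid out as (register offset, value) pairs for an MI_LOAD_REGISTER_IMM-style load, so CTX_RING_HEAD indexes the RING_HEAD register offset entry and CTX_RING_HEAD+1 the value to be loaded -- hence writing only the +1 slots zeroes the registers without disturbing the layout. A standalone sketch of that pairs convention (made-up indices and mmio offsets, not the real context image):

```c
#include <stdint.h>

/* Illustrative indices into a (reg, value) pair array; the real
 * CTX_* constants in intel_lrc.c differ. */
#define CTX_RING_HEAD 4		/* slot holding the RING_HEAD register offset */
#define CTX_RING_TAIL 6		/* slot holding the RING_TAIL register offset */

/* Build a toy context image: even slots hold register offsets,
 * odd slots hold the values an LRI-style load would write. */
void populate(uint32_t *reg_state, uint32_t head, uint32_t tail)
{
	reg_state[CTX_RING_HEAD]     = 0x2034;	/* made-up RING_HEAD offset */
	reg_state[CTX_RING_HEAD + 1] = head;
	reg_state[CTX_RING_TAIL]     = 0x2030;	/* made-up RING_TAIL offset */
	reg_state[CTX_RING_TAIL + 1] = tail;
}

/* The reset path touches only the value slots, exactly as the patch
 * does, leaving the register offsets intact. */
void reset_head_tail(uint32_t *reg_state)
{
	reg_state[CTX_RING_HEAD + 1] = 0;
	reg_state[CTX_RING_TAIL + 1] = 0;
}
```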
> diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
> index f635735..5dd0eca 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.h
> +++ b/drivers/gpu/drm/i915/intel_lrc.h
> @@ -73,6 +73,8 @@ int intel_lr_context_deferred_create(struct intel_context *ctx,
>  				     struct intel_engine_cs *ring);
>  void intel_lr_context_unpin(struct intel_engine_cs *ring,
>  		struct intel_context *ctx);
> +void intel_lr_context_reset(struct drm_device *dev,
> +			struct intel_context *ctx);
>  
>  /* Execlists */
>  int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
> 


Thread overview: 5+ messages
2015-02-16 16:12 [PATCH] drm/i915: Reset logical ring contexts' head and tail during GPU reset Thomas Daniel
2015-02-17  3:08 ` shuang.he
2015-02-17 11:34 ` Dave Gordon [this message]
2015-02-23 23:21   ` Daniel Vetter
2015-02-23 23:21 ` Daniel Vetter
