public inbox for intel-gfx@lists.freedesktop.org
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Mika Kuoppala <mika.kuoppala@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/2] drm/i915: Recover all available ringbuffer space following reset
Date: Wed, 28 Oct 2015 17:18:06 +0000	[thread overview]
Message-ID: <5631034E.3040608@linux.intel.com> (raw)
In-Reply-To: <87egglyfh4.fsf@gaia.fi.intel.com>


On 23/10/15 12:46, Mika Kuoppala wrote:
> Chris Wilson <chris@chris-wilson.co.uk> writes:
>
>> On Fri, Oct 23, 2015 at 02:07:35PM +0300, Mika Kuoppala wrote:
>>> Chris Wilson <chris@chris-wilson.co.uk> writes:
>>>
>>>> Having flushed all requests from all queues, we know that all
>>>> ringbuffers must now be empty. However, since we do not reclaim
>>>> all space when retiring the request (to prevent HEADs colliding
>>>> with rapid ringbuffer wraparound) the amount of available space
>>>> on each ringbuffer upon reset is less than when we started. Do one
>>>> more pass over all the ringbuffers to reset the available space.
>>>>
>>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>>> Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
>>>> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
>>>> Cc: Dave Gordon <david.s.gordon@intel.com>
>>>> ---
>>>>   drivers/gpu/drm/i915/i915_gem.c         | 14 ++++++++++++++
>>>>   drivers/gpu/drm/i915/intel_lrc.c        |  1 +
>>>>   drivers/gpu/drm/i915/intel_ringbuffer.c | 13 ++++++++++---
>>>>   drivers/gpu/drm/i915/intel_ringbuffer.h |  2 ++
>>>>   4 files changed, 27 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>>> index 41263cd4170c..3a42c350fec9 100644
>>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>>> @@ -2738,6 +2738,8 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
>>>>   static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>>>>   					struct intel_engine_cs *ring)
>>>>   {
>>>> +	struct intel_ringbuffer *buffer;
>>>> +
>>>>   	while (!list_empty(&ring->active_list)) {
>>>>   		struct drm_i915_gem_object *obj;
>>>>
>>>> @@ -2783,6 +2785,18 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>>>>
>>>>   		i915_gem_request_retire(request);
>>>>   	}
>>>> +
>>>> +	/* Having flushed all requests from all queues, we know that all
>>>> +	 * ringbuffers must now be empty. However, since we do not reclaim
>>>> +	 * all space when retiring the request (to prevent HEADs colliding
>>>> +	 * with rapid ringbuffer wraparound) the amount of available space
>>>> +	 * upon reset is less than when we start. Do one more pass over
>>>> +	 * all the ringbuffers to reset last_retired_head.
>>>> +	 */
>>>> +	list_for_each_entry(buffer, &ring->buffers, link) {
>>>> +		buffer->last_retired_head = buffer->tail;
>>>> +		intel_ring_update_space(buffer);
>>>> +	}
>>>
>>> This is all in vain as the i915_gem_context_reset() ->
>>> intel_lr_context_reset still sets head and tail to zero.
>>>
>>> So your last_retired_head will still dangle in a pre-reset
>>> world when the rest of the ringbuf items will be set to post
>>> reset world.
>>
>> It's only setting that so that we compute the full ring space as
>> available, and then we set last_retired_head back to -1. So what's
>> dangling?
>> -Chris
>
> My understanding of the ringbuffer code was dangling. It is all
> clear now. We set head = tail and thus reset the ring space to full.
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=91634
>
> should be added as this very likely fixes that one.
>
> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>

I've merged this one to try out dim - seems to have worked.

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 12+ messages
2015-09-03 12:01 [PATCH 1/2] drm/i915: Refactor common ringbuffer allocation code Chris Wilson
2015-09-03 12:01 ` [PATCH 2/2] drm/i915: Recover all available ringbuffer space following reset Chris Wilson
2015-09-28 15:25   ` Mika Kuoppala
2015-09-28 15:43     ` Chris Wilson
2015-10-23 11:07   ` Mika Kuoppala
2015-10-23 11:12     ` Chris Wilson
2015-10-23 11:46       ` Mika Kuoppala
2015-10-28 17:18         ` Tvrtko Ursulin [this message]
2015-09-03 14:01 ` [PATCH 1/2] drm/i915: Refactor common ringbuffer allocation code Zanoni, Paulo R
2015-09-03 14:47   ` chris
2015-09-03 14:56   ` Mika Kuoppala
2015-09-04  8:17     ` Daniel Vetter
