public inbox for intel-gfx@lists.freedesktop.org
From: Keith Packard <keithp@keithp.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: Re: [PATCH] drm/i915: Seperate fence pin counting from normal bind pin counting
Date: Sat, 04 Jun 2011 10:38:07 -0700	[thread overview]
Message-ID: <yun4o45pke8.fsf@aiko.keithp.com> (raw)
In-Reply-To: <1307177743-3195-1-git-send-email-chris@chris-wilson.co.uk>

On Sat,  4 Jun 2011 09:55:43 +0100, Chris Wilson <chris@chris-wilson.co.uk> wrote:

> In order to correctly account for reserving space in the GTT and fences
> for a batch buffer, we need to independently track whether the fence is
> pinned due to a fenced GPU access in the batch from whether the
> buffer is pinned in the aperture. Currently we count the fence as
> pinned if the buffer has already been seen in the execbuffer. This leads
> to a false accounting of available fence registers, causing frequent
> mass evictions. Worse, if coupled with the change to make
> i915_gem_object_get_fence() report EDEADLK upon fence starvation, the
> batchbuffer can fail with only one fence required...

I'm afraid you've completely lost me here. Can you provide a small
example (libdrm?) program which exhibits the failure so I can follow
what the problem is?

And, if I understand any of this at all, I should remove the patch to
return -EDEADLK from i915_gem_object_get_fence as we may run out of
fence registers even if the client is accounting for them correctly. If
so, I'll remove that from my list of -fixes patches.

As this is a performance optimization, I also expect to see convincing
benchmark data before this patch could be considered for merging.

-- 
keith.packard@intel.com

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 17+ messages
2011-06-04  8:55 [PATCH] drm/i915: Seperate fence pin counting from normal bind pin counting Chris Wilson
2011-06-04 17:38 ` Keith Packard [this message]
2011-06-04 18:31   ` Chris Wilson
2011-06-05  1:07     ` Keith Packard
2011-06-04 22:18   ` Chris Wilson
2011-06-05 20:55 ` Daniel Vetter
2011-06-06  6:10   ` [PATCH] drm/i915: Separate " Chris Wilson
2011-09-02 18:43     ` Eugeni Dodonov
2011-11-13 11:00 ` [PATCH] drm/i915: Seperate " Chris Wilson
2011-11-13 11:21   ` Paul Menzel
2011-11-23 12:38   ` Daniel Vetter
2011-11-23 13:04     ` [PATCH] drm/i915: Separate " Chris Wilson
2012-01-29 17:25       ` Daniel Vetter
  -- strict thread matches above, loose matches on Subject: below --
2011-07-09  8:25 [PATCH] drm/i915: Seperate " Chris Wilson
2011-07-09 10:23 ` Paul Menzel
2011-07-09 10:32   ` Chris Wilson
2011-11-23 10:49 ` Daniel Vetter
