From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 6/6] drm/i915: Track the last-active inside the i915_vma
Date: Tue, 3 Jul 2018 18:40:23 +0100 [thread overview]
Message-ID: <b250d9b4-e277-96e9-a1c6-39d7acdf8254@linux.intel.com> (raw)
In-Reply-To: <20180629225419.5832-6-chris@chris-wilson.co.uk>
On 29/06/2018 23:54, Chris Wilson wrote:
> Using a VMA on more than one timeline concurrently is the exception
> rather than the rule (using it concurrently on multiple engines). As we
> expect to only use one active tracker, store the most recently used
> tracker inside the i915_vma itself and only fallback to the radixtree if
It is an rbtree now.
> we need a second or more concurrent active trackers.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/i915_vma.c | 36 +++++++++++++++++++++++++++++++--
> drivers/gpu/drm/i915/i915_vma.h | 1 +
> 2 files changed, 35 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
> index 2faad2a1d00e..9b69d24c5cf1 100644
> --- a/drivers/gpu/drm/i915/i915_vma.c
> +++ b/drivers/gpu/drm/i915/i915_vma.c
> @@ -119,6 +119,12 @@ i915_vma_retire(struct i915_gem_active *base, struct i915_request *rq)
> __i915_vma_retire(active->vma, rq);
> }
>
> +static void
> +i915_vma_last_retire(struct i915_gem_active *base, struct i915_request *rq)
> +{
> + __i915_vma_retire(container_of(base, struct i915_vma, last_active), rq);
> +}
> +
> static struct i915_vma *
> vma_create(struct drm_i915_gem_object *obj,
> struct i915_address_space *vm,
> @@ -136,6 +142,7 @@ vma_create(struct drm_i915_gem_object *obj,
>
> vma->active = RB_ROOT;
>
> + init_request_active(&vma->last_active, i915_vma_last_retire);
> init_request_active(&vma->last_fence, NULL);
> vma->vm = vm;
> vma->ops = &vm->vma_ops;
> @@ -895,6 +902,15 @@ static struct i915_gem_active *lookup_active(struct i915_vma *vma, u64 idx)
> {
> struct i915_vma_active *active;
> struct rb_node **p, *parent;
> + struct i915_request *old;
> +
> + old = i915_gem_active_raw(&vma->last_active,
> + &vma->vm->i915->drm.struct_mutex);
> + if (!old || old->fence.context == idx)
> + goto out;
Please put a comment on this block explaining the caching optimisation.
> +
> + /* Move the currently active fence into the rbtree */
> + idx = old->fence.context;
I find "currently active fence" a bit confusing. It is just the "last
accessed/cached fence", no?
>
> parent = NULL;
> p = &vma->active.rb_node;
> @@ -903,7 +919,7 @@ static struct i915_gem_active *lookup_active(struct i915_vma *vma, u64 idx)
>
> active = rb_entry(parent, struct i915_vma_active, node);
> if (active->timeline == idx)
> - return &active->base;
> + goto replace;
>
> if (active->timeline < idx)
> p = &parent->rb_right;
> @@ -922,7 +938,18 @@ static struct i915_gem_active *lookup_active(struct i915_vma *vma, u64 idx)
> rb_link_node(&active->node, parent, p);
> rb_insert_color(&active->node, &vma->active);
>
> - return &active->base;
> +replace:
> + if (i915_gem_active_isset(&active->base)) {
> + __list_del_entry(&active->base.link);
> + vma->active_count--;
> + GEM_BUG_ON(!vma->active_count);
> + }
When does this trigger? Please put a comment for this block.
> + GEM_BUG_ON(list_empty(&vma->last_active.link));
> + list_replace_init(&vma->last_active.link, &active->base.link);
> + active->base.request = fetch_and_zero(&vma->last_active.request);
> +
> +out:
> + return &vma->last_active;
> }
>
> int i915_vma_move_to_active(struct i915_vma *vma,
> @@ -1002,6 +1029,11 @@ int i915_vma_unbind(struct i915_vma *vma)
> */
> __i915_vma_pin(vma);
>
> + ret = i915_gem_active_retire(&vma->last_active,
> + &vma->vm->i915->drm.struct_mutex);
Dang... there goes my idea to iterate purely over the tree..
> + if (ret)
> + goto unpin;
> +
> rbtree_postorder_for_each_entry_safe(active, n,
> &vma->active, node) {
> ret = i915_gem_active_retire(&active->base,
> diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
> index c297b0a0dc47..f06d66377107 100644
> --- a/drivers/gpu/drm/i915/i915_vma.h
> +++ b/drivers/gpu/drm/i915/i915_vma.h
> @@ -97,6 +97,7 @@ struct i915_vma {
>
> unsigned int active_count;
> struct rb_root active;
> + struct i915_gem_active last_active;
> struct i915_gem_active last_fence;
>
> /**
>
Okay, I get the idea and it is a good optimisation for the common case.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx