From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 3/3] drm/i915: Allocate active tracking nodes from a slabcache
Date: Thu, 31 Jan 2019 11:39:46 +0000 [thread overview]
Message-ID: <7373efe2-497e-9857-b748-badb08de179f@linux.intel.com> (raw)
In-Reply-To: <20190130205039.20959-3-chris@chris-wilson.co.uk>
On 30/01/2019 20:50, Chris Wilson wrote:
> Wrap the active tracking for GPU references in a slabcache for faster
> allocations and, hopefully, reduced fragmentation.
>
v2 where art thou? :)
> v3: Nothing device-specific left; it's just a slabcache that we can
> make global.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_active.c | 31 +++++++++++++++++++++++++++---
>  drivers/gpu/drm/i915/i915_active.h |  3 +++
>  drivers/gpu/drm/i915/i915_pci.c    |  3 +++
>  3 files changed, 34 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
> index b1fefe98f9a6..5818ddf88462 100644
> --- a/drivers/gpu/drm/i915/i915_active.c
> +++ b/drivers/gpu/drm/i915/i915_active.c
> @@ -9,6 +9,17 @@
>
>  #define BKL(ref) (&(ref)->i915->drm.struct_mutex)
>
> +/*
> + * Active refs memory management
> + *
> + * To be more economical with memory, we reap all the i915_active trees as
> + * they idle (when we know the active requests are inactive) and allocate the
> + * nodes from a local slab cache to hopefully reduce the fragmentation.
> + */
> +static struct i915_global_active {
> +        struct kmem_cache *slab_cache;
> +} global;
> +
>  struct active_node {
>          struct i915_gem_active base;
>          struct i915_active *ref;
> @@ -23,7 +34,7 @@ __active_park(struct i915_active *ref)
>
>          rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
>                  GEM_BUG_ON(i915_gem_active_isset(&it->base));
> -                kfree(it);
> +                kmem_cache_free(global.slab_cache, it);
>          }
>          ref->tree = RB_ROOT;
>  }
> @@ -96,11 +107,11 @@ active_instance(struct i915_active *ref, u64 idx)
>                          p = &parent->rb_left;
>          }
>
> -        node = kmalloc(sizeof(*node), GFP_KERNEL);
> +        node = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
>
>          /* kmalloc may retire the ref->last (thanks shrinker)! */
>          if (unlikely(!i915_gem_active_raw(&ref->last, BKL(ref)))) {
> -                kfree(node);
> +                kmem_cache_free(global.slab_cache, node);
>                  goto out;
>          }
>
> @@ -234,6 +245,20 @@ void i915_active_fini(struct i915_active *ref)
>          GEM_BUG_ON(!RB_EMPTY_ROOT(&ref->tree));
>          GEM_BUG_ON(ref->count);
>  }
> +
> +int __init i915_global_active_init(void)
> +{
> +        global.slab_cache = KMEM_CACHE(active_node, SLAB_HWCACHE_ALIGN);
> +        if (!global.slab_cache)
> +                return -ENOMEM;
> +
> +        return 0;
> +}
> +
> +void __exit i915_global_active_exit(void)
> +{
> +        kmem_cache_destroy(global.slab_cache);
> +}
>  #endif
>
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
> index 6130c6770d10..48fdb1497883 100644
> --- a/drivers/gpu/drm/i915/i915_active.h
> +++ b/drivers/gpu/drm/i915/i915_active.h
> @@ -63,4 +63,7 @@ void i915_active_fini(struct i915_active *ref);
>  static inline void i915_active_fini(struct i915_active *ref) { }
>  #endif
>
> +int i915_global_active_init(void);
> +void i915_global_active_exit(void);
> +
>  #endif /* _I915_ACTIVE_H_ */
> diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
> index 44c23ac60347..751a787c83d1 100644
> --- a/drivers/gpu/drm/i915/i915_pci.c
> +++ b/drivers/gpu/drm/i915/i915_pci.c
Add explicit #include "i915_active.h"?
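I.e. just a sketch of what I have in mind, alongside the existing includes
near the top of i915_pci.c:

  #include "i915_active.h"

so that the i915_global_active_init()/i915_global_active_exit() declarations
are guaranteed to be visible here, rather than relying on them coming in
indirectly through another header.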
> @@ -793,6 +793,8 @@ static int __init i915_init(void)
>          bool use_kms = true;
>          int err;
>
> +        i915_global_active_init();
> +
>          err = i915_mock_selftests();
>          if (err)
>                  return err > 0 ? 0 : err;
> @@ -824,6 +826,7 @@ static void __exit i915_exit(void)
>                  return;
>
>          pci_unregister_driver(&i915_pci_driver);
> +        i915_global_active_exit();
>  }
>
>  module_init(i915_init);
>
Regards,
Tvrtko