From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 3/6] drm/i915: Split the batch pool by engine
Date: Wed, 18 Mar 2015 16:51:58 +0000 [thread overview]
Message-ID: <5509AD2E.8080809@linux.intel.com> (raw)
In-Reply-To: <1425894946-10351-3-git-send-email-chris@chris-wilson.co.uk>
On 03/09/2015 09:55 AM, Chris Wilson wrote:
> I woke up one morning and found 50k objects sitting in the batch pool
> and every search seemed to iterate the entire list... Painting the
> screen in oils would provide a more fluid display.
>
> One issue with the current design is that we only check for retirements
> on the current ring when preparing to submit a new batch. This means
> that we can have thousands of "active" batches on another ring that we
> have to walk over. The simplest way to avoid that is to split the pools
> per ring and then our LRU execution ordering will also ensure that the
> inactive buffers remain at the front.
Is the retire work handler not keeping up? Regularly or under some
specific circumstances?
> v2: execlists still requires duplicate code.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/i915_debugfs.c | 33 ++++++++++++++++++------------
> drivers/gpu/drm/i915/i915_dma.c | 1 -
> drivers/gpu/drm/i915/i915_drv.h | 8 --------
> drivers/gpu/drm/i915/i915_gem.c | 2 --
> drivers/gpu/drm/i915/i915_gem_batch_pool.c | 3 ++-
> drivers/gpu/drm/i915/i915_gem_execbuffer.c | 3 +--
> drivers/gpu/drm/i915/intel_lrc.c | 1 +
> drivers/gpu/drm/i915/intel_ringbuffer.c | 2 ++
> drivers/gpu/drm/i915/intel_ringbuffer.h | 8 ++++++++
> 9 files changed, 34 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index d2e6475b0466..d8ca5c89647c 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -377,13 +377,17 @@ static void print_batch_pool_stats(struct seq_file *m,
> {
> struct drm_i915_gem_object *obj;
> struct file_stats stats;
> + struct intel_engine_cs *ring;
> + int i;
>
> memset(&stats, 0, sizeof(stats));
>
> - list_for_each_entry(obj,
> - &dev_priv->mm.batch_pool.cache_list,
> - batch_pool_list)
> - per_file_stats(0, obj, &stats);
> + for_each_ring(ring, dev_priv, i) {
> + list_for_each_entry(obj,
> + &ring->batch_pool.cache_list,
> + batch_pool_list)
> + per_file_stats(0, obj, &stats);
> + }
>
> print_file_stats(m, "batch pool", stats);
> }
> @@ -619,21 +623,24 @@ static int i915_gem_batch_pool_info(struct seq_file *m, void *data)
> struct drm_device *dev = node->minor->dev;
> struct drm_i915_private *dev_priv = dev->dev_private;
> struct drm_i915_gem_object *obj;
> + struct intel_engine_cs *ring;
> int count = 0;
> - int ret;
> + int ret, i;
>
> ret = mutex_lock_interruptible(&dev->struct_mutex);
> if (ret)
> return ret;
>
> - seq_puts(m, "cache:\n");
> - list_for_each_entry(obj,
> - &dev_priv->mm.batch_pool.cache_list,
> - batch_pool_list) {
> - seq_puts(m, " ");
> - describe_obj(m, obj);
> - seq_putc(m, '\n');
> - count++;
> + for_each_ring(ring, dev_priv, i) {
> + seq_printf(m, "%s cache:\n", ring->name);
> + list_for_each_entry(obj,
> + &ring->batch_pool.cache_list,
> + batch_pool_list) {
> + seq_puts(m, " ");
> + describe_obj(m, obj);
> + seq_putc(m, '\n');
> + count++;
> + }
> }
>
> seq_printf(m, "total: %d\n", count);
> diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> index 8e914303b831..70e050ac02b4 100644
> --- a/drivers/gpu/drm/i915/i915_dma.c
> +++ b/drivers/gpu/drm/i915/i915_dma.c
> @@ -1062,7 +1062,6 @@ int i915_driver_unload(struct drm_device *dev)
>
> mutex_lock(&dev->struct_mutex);
> i915_gem_cleanup_ringbuffer(dev);
> - i915_gem_batch_pool_fini(&dev_priv->mm.batch_pool);
> i915_gem_context_fini(dev);
> mutex_unlock(&dev->struct_mutex);
> i915_gem_cleanup_stolen(dev);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index b62e3c478221..9fe1274cfe59 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -37,7 +37,6 @@
> #include "intel_bios.h"
> #include "intel_ringbuffer.h"
> #include "intel_lrc.h"
> -#include "i915_gem_batch_pool.h"
> #include "i915_gem_gtt.h"
> #include "i915_gem_render_state.h"
> #include <linux/io-mapping.h>
> @@ -1149,13 +1148,6 @@ struct i915_gem_mm {
> */
> struct list_head unbound_list;
>
> - /*
> - * A pool of objects to use as shadow copies of client batch buffers
> - * when the command parser is enabled. Prevents the client from
> - * modifying the batch contents after software parsing.
> - */
> - struct i915_gem_batch_pool batch_pool;
> -
> /** Usable portion of the GTT for GEM */
> unsigned long stolen_base; /* limited to low memory (32-bit) */
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 4d8fb3f4a7a1..9f33005bdfd1 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -5039,8 +5039,6 @@ i915_gem_load(struct drm_device *dev)
> dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
> register_oom_notifier(&dev_priv->mm.oom_notifier);
>
> - i915_gem_batch_pool_init(dev, &dev_priv->mm.batch_pool);
> -
> mutex_init(&dev_priv->fb_tracking.lock);
> }
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> index 21f3356cc0ab..1287abf55b84 100644
> --- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> +++ b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> @@ -96,8 +96,9 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
>
> list_for_each_entry_safe(tmp, next,
> &pool->cache_list, batch_pool_list) {
> + /* The batches are strictly LRU ordered */
> if (tmp->active)
> - continue;
> + break;
This hunk would be a better fit for 2/6, correct?
> /* While we're looping, do some clean up */
> if (tmp->madv == __I915_MADV_PURGED) {
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 36f8f05928d1..8129a8c75a21 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1152,12 +1152,11 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *ring,
> u32 batch_len,
> bool is_master)
> {
> - struct drm_i915_private *dev_priv = to_i915(batch_obj->base.dev);
> struct drm_i915_gem_object *shadow_batch_obj;
> struct i915_vma *vma;
> int ret;
>
> - shadow_batch_obj = i915_gem_batch_pool_get(&dev_priv->mm.batch_pool,
> + shadow_batch_obj = i915_gem_batch_pool_get(&ring->batch_pool,
> PAGE_ALIGN(batch_len));
> if (IS_ERR(shadow_batch_obj))
> return shadow_batch_obj;
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index fcb074bd55dc..a6d4d5502188 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1421,6 +1421,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
> ring->dev = dev;
> INIT_LIST_HEAD(&ring->active_list);
> INIT_LIST_HEAD(&ring->request_list);
> + i915_gem_batch_pool_init(dev, &ring->batch_pool);
> init_waitqueue_head(&ring->irq_queue);
>
> INIT_LIST_HEAD(&ring->execlist_queue);
What about the cleanup counterpart for the _lrc path? intel_ringbuffer.c gets two lines in the diffstat but intel_lrc.c only adds the init here.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx