From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Subject: Re: [PATCH v2] drm/i915: Reduce context HW ID lifetime
Date: Wed, 5 Sep 2018 11:55:38 +0100 [thread overview]
Message-ID: <f5b66616-9032-3f3d-95ac-abfc1189c752@linux.intel.com> (raw)
In-Reply-To: <153614361177.28292.12286916946922087900@skylake-alporthouse-com>
On 05/09/2018 11:33, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-09-05 10:49:02)
>>
>> On 04/09/2018 16:31, Chris Wilson wrote:
>>> Future gen reduce the number of bits we will have available to
>>> differentiate between contexts, so reduce the lifetime of the ID
>>> assignment from that of the context to its current active cycle (i.e.
>>> only while it is pinned for use by the HW, will it have a constant ID).
>>> This means that instead of a max of 2k allocated contexts (worst case
>>> before fun with bit twiddling), we instead have a limit of 2k in flight
>>> contexts (minus a few that have been pinned by the kernel or by perf).
>>>
>>> To reduce the number of context ids we require, we allocate a context id
>>> on first use and mark it as pinned for as long as the GEM context itself
>>> is, that is we keep it pinned while active on each engine. If we exhaust
>>> our context id space, then we try to reclaim an id from an idle context.
>>> In the extreme case where all context ids are pinned by active contexts,
>>> we force the system to idle in order to recover ids.
>>>
>>> We cannot reduce the scope of an HW-ID to an engine (allowing the same
>>> gem_context to have different ids on each engine) as in the future we
>>> will need to preassign an id before we know which engine the
>>> context is being executed on.
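
[The pin/steal scheme described above could be sketched roughly as below. This is only an illustration, not the driver's actual code: the structure names, the helpers, and the tiny four-id pool are all made up here; the real implementation lives in i915_gem_context.c in the diff and uses the hardware's full id space.]

```c
#include <assert.h>

#define MAX_HW_ID 4	/* tiny pool to demonstrate exhaustion */

struct ctx {
	int hw_id;	/* -1 while no id is assigned */
	int pin_count;	/* >0 while the context is active on any engine */
};

static struct ctx *id_owner[MAX_HW_ID];	/* which context holds each id */

/* Try a free id first; failing that, steal one from an idle owner. */
static int assign_hw_id(struct ctx *c)
{
	int i;

	if (c->hw_id >= 0)	/* already holds an id, keep it */
		return 0;

	for (i = 0; i < MAX_HW_ID; i++) {
		if (!id_owner[i]) {	/* unused id */
			id_owner[i] = c;
			c->hw_id = i;
			return 0;
		}
	}

	for (i = 0; i < MAX_HW_ID; i++) {
		if (id_owner[i]->pin_count == 0) {
			/* idle context: revoke its id and reuse it */
			id_owner[i]->hw_id = -1;
			id_owner[i] = c;
			c->hw_id = i;
			return 0;
		}
	}

	return -1;	/* every id pinned: caller must idle the GPU first */
}

static int pin_hw_id(struct ctx *c)
{
	if (assign_hw_id(c))
		return -1;
	c->pin_count++;
	return 0;
}

static void unpin_hw_id(struct ctx *c)
{
	c->pin_count--;	/* id stays assigned but becomes stealable */
}
```

[So an unpinned context keeps its id until someone else runs out, which is what turns the "2k allocated contexts" cap into a "2k in-flight contexts" cap.]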
>>>
>>> v2: Improved commentary (Tvrtko) [I tried at least]
>>>
>>> References: https://bugs.freedesktop.org/show_bug.cgi?id=107788
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
>>> Cc: Michel Thierry <michel.thierry@intel.com>
>>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>>> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> ---
>>> drivers/gpu/drm/i915/i915_debugfs.c | 5 +-
>>> drivers/gpu/drm/i915/i915_drv.h | 2 +
>>> drivers/gpu/drm/i915/i915_gem_context.c | 222 +++++++++++++-----
>>> drivers/gpu/drm/i915/i915_gem_context.h | 23 ++
>>> drivers/gpu/drm/i915/intel_lrc.c | 8 +
>>> drivers/gpu/drm/i915/selftests/mock_context.c | 11 +-
>>> 6 files changed, 201 insertions(+), 70 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
>>> index 4ad0e2ed8610..1f7051e97afb 100644
>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>> @@ -1953,7 +1953,10 @@ static int i915_context_status(struct seq_file *m, void *unused)
>>> return ret;
>>>
>>> list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
>>> - seq_printf(m, "HW context %u ", ctx->hw_id);
>>> + seq_puts(m, "HW context ");
>>> + if (!list_empty(&ctx->hw_id_link))
>>> + seq_printf(m, "%x [pin %u]", ctx->hw_id,
>>> + atomic_read(&ctx->hw_id_pin_count));
>>
>> Do you want to put some marker for the unallocated case here?
>
> I was content with absence of marker as indicating it has never had an
> id, or it had been revoked.
>
> Who reads this file anyway? There's not one igt where I've thought this
> would be useful debug info. Maybe I'm wrong...
True, the file as it stands looks weak. It was only a question/suggestion
anyway. Perhaps we can later improve the file to list contexts in a more
modern/meaningful way.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview: 18+ messages
2018-08-30 10:24 [PATCH] drm/i915: Reduce context HW ID lifetime Chris Wilson
2018-08-30 12:38 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Reduce context HW ID lifetime (rev2) Patchwork
2018-08-30 12:39 ` ✗ Fi.CI.SPARSE: " Patchwork
2018-08-30 12:58 ` ✓ Fi.CI.BAT: success " Patchwork
2018-08-30 16:23 ` [PATCH] drm/i915: Reduce context HW ID lifetime Tvrtko Ursulin
2018-08-31 12:36 ` Chris Wilson
2018-09-03 9:59 ` Tvrtko Ursulin
2018-09-04 13:48 ` Chris Wilson
2018-08-30 17:10 ` ✓ Fi.CI.IGT: success for drm/i915: Reduce context HW ID lifetime (rev2) Patchwork
2018-09-04 15:31 ` [PATCH v2] drm/i915: Reduce context HW ID lifetime Chris Wilson
2018-09-05 9:49 ` Tvrtko Ursulin
2018-09-05 10:33 ` Chris Wilson
2018-09-05 10:55 ` Tvrtko Ursulin [this message]
2018-09-05 11:01 ` Chris Wilson
2018-09-04 16:02 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Reduce context HW ID lifetime (rev3) Patchwork
2018-09-04 16:03 ` ✗ Fi.CI.SPARSE: " Patchwork
2018-09-04 16:19 ` ✓ Fi.CI.BAT: success " Patchwork
2018-09-04 22:28 ` ✓ Fi.CI.IGT: " Patchwork