From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>,
	Tvrtko Ursulin <tursulin@ursulin.net>,
	Intel-gfx@lists.freedesktop.org
Subject: Re: [RFC 00/17] Per-context and per-client engine busyness
Date: Thu, 26 Oct 2017 08:34:24 +0100
Message-ID: <d208992c-d0d3-8db7-9599-cb8706ea7cf3@linux.intel.com>
In-Reply-To: <150895311007.2864.18313558067196575113@mail.alporthouse.com>


On 25/10/2017 18:38, Chris Wilson wrote:
> Quoting Chris Wilson (2017-10-25 16:47:13)
>> Quoting Tvrtko Ursulin (2017-10-25 16:36:15)
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

>> I've prototyped a quick demo of intel-client-top which produces output like:
>>
>>      neverball[  6011]:  rcs0:  41.01%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
>>           Xorg[  5664]:  rcs0:  31.16%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
>>          xfwm4[  5727]:  rcs0:   0.00%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
> 
> +1
> +2 for a graph ;)

Where are those placement students when you need them! :)
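
(For the record, the arithmetic such a tool needs is simple once
cumulative busy-time counters are exposed: sample twice and divide the
busy delta by the wall-clock delta. A minimal userspace sketch, assuming
a hypothetical sysfs layout - the real paths are whatever the sysfs
patches in the series end up exposing:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

static unsigned long long read_busy_ns(const char *path)
{
	unsigned long long v = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llu", &v) != 1)
			v = 0;
		fclose(f);
	}
	return v;
}

int main(void)
{
	/* Placeholder path - actual layout comes from the sysfs patches. */
	const char *path = "/sys/class/drm/card0/clients/6011/busy/rcs0";
	unsigned long long t0, t1, b0, b1;
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	t0 = ts.tv_sec * 1000000000ULL + ts.tv_nsec;
	b0 = read_busy_ns(path);

	sleep(1);

	clock_gettime(CLOCK_MONOTONIC, &ts);
	t1 = ts.tv_sec * 1000000000ULL + ts.tv_nsec;
	b1 = read_busy_ns(path);

	/* Fraction of wall time this client kept the engine busy. */
	printf("rcs0: %6.2f%%\n", 100.0 * (b1 - b0) / (t1 - t0));
	return 0;
}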

>>> Another potential use for the per-client infrastructure is tying it in
>>> with the perf PMU. At the moment our perf PMU exposes global counters
>>> only. With the per-client infrastructure it should be possible to make
>>> it work in task mode as well, and so enable GPU busyness profiling of
>>> single tasks.
>>
>> ctx->pid can be misleading, as it is set on creation, but the context
>> can be transferred over an fd to the real client. (Typically that
>> applies to the default context, 0.)
> 
> Ok, I see that you update the pid when a new context is created. Still
> have the likes of libva that may use DRI3 without creating a context
> itself.

Hm, how rude of the protocol to provide this anonymization service!

I guess I could update on submission as well and then there is no escape.
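
Roughly along these lines - a sketch only, where the client fields and
the helper name are illustrative rather than what the series actually
carries:

/*
 * Sketch: refresh the tracked pid/name at request submission so a
 * context handed over an fd is attributed to whoever actually submits
 * work, not whoever created it.  ctx->client is a hypothetical field.
 */
static void i915_gem_context_update_client(struct i915_gem_context *ctx)
{
	struct task_struct *task = current;

	if (READ_ONCE(ctx->client.pid) == task_pid_nr(task))
		return;

	WRITE_ONCE(ctx->client.pid, task_pid_nr(task));
	get_task_comm(ctx->client.name, task);
}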
 
> Back to the general niggle; I really would like to avoid adding custom
> i915 interfaces for this, that should be a last resort if we can find no
> way through e.g. perf.

I certainly plan to investigate adding pid filtering to the PMU. It is
supposed to be possible, but I haven't tried it yet. I am also not sure
whether it will be exactly suitable for a top-like tool. I will see if I
manage to get it working.
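
Mechanically that would be perf_event_open(2) with a target pid instead
of a cpu; whether our uncore-style PMU can support that mode at all is
exactly the open question. A sketch of the userspace side - attr.type
and attr.config are placeholders that would come from the i915 PMU's
sysfs entries:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct perf_event_attr attr;
	uint64_t busy_ns;
	pid_t pid;
	int fd;

	if (argc < 2)
		return 1;
	pid = atoi(argv[1]);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = 0;		/* i915 PMU type id - placeholder */
	attr.config = 0;	/* engine busy counter - placeholder */

	/* pid >= 0 with cpu == -1 requests per-task counting. */
	fd = syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	sleep(1);
	if (read(fd, &busy_ns, sizeof(busy_ns)) != sizeof(busy_ns))
		return 1;
	printf("pid %d busy: %llu ns over 1s\n",
	       pid, (unsigned long long)busy_ns);
	return 0;
}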

But what do you say about the simple per-context API (patch 13)? Do you
find using context get-param for this acceptable, or can you think of a
different way?
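
For illustration, the userspace side would just be the existing ioctl
with a new parameter - the param name and value below are placeholders
for whatever patch 13 actually defines:

#include <sys/ioctl.h>
#include <drm/i915_drm.h>

#define I915_CONTEXT_PARAM_ENGINE_BUSY 0x7	/* placeholder */

static int query_ctx_busy_ns(int drm_fd, __u32 ctx_id, __u64 *busy_ns)
{
	struct drm_i915_gem_context_param p = {
		.ctx_id = ctx_id,
		.param  = I915_CONTEXT_PARAM_ENGINE_BUSY,
	};

	if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &p))
		return -1;

	*busy_ns = p.value;	/* cumulative busy time in nanoseconds */
	return 0;
}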

Regards,

Tvrtko

Thread overview: 30+ messages
2017-10-25 15:36 [RFC 00/17] Per-context and per-client engine busyness Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 01/17] drm/i915: Extract intel_get_cagf Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 02/17] drm/i915/pmu: Expose a PMU interface for perf queries Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 03/17] drm/i915/pmu: Suspend sampling when GPU is idle Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 04/17] drm/i915: Wrap context schedule notification Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 05/17] drm/i915: Engine busy time tracking Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 06/17] drm/i915/pmu: Wire up engine busy stats to PMU Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 07/17] drm/i915/pmu: Add interrupt count metric Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 08/17] drm/i915: Convert intel_rc6_residency_us to ns Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 09/17] drm/i915/pmu: Add RC6 residency metrics Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 10/17] drm/i915: Keep a count of requests waiting for a slot on GPU Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 11/17] drm/i915/pmu: Add queued counter Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 12/17] drm/i915: Track per-context engine busyness Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 13/17] drm/i915: Allow clients to query own per-engine busyness Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 14/17] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 15/17] drm/i915: Update client name on context create Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 16/17] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2017-10-25 15:36 ` [RFC 17/17] drm/i915: Add sysfs toggle to enable per-client engine stats Tvrtko Ursulin
2017-10-25 15:47 ` [RFC 00/17] Per-context and per-client engine busyness Chris Wilson
2017-10-25 17:38   ` Chris Wilson
2017-10-26  7:34     ` Tvrtko Ursulin [this message]
2017-10-26  7:51       ` Chris Wilson
2017-10-26  9:50       ` Lionel Landwerlin
2017-10-26 10:10         ` Chris Wilson
2017-10-26 13:00         ` Tvrtko Ursulin
2017-10-26 13:05           ` Chris Wilson
2017-10-26 17:13             ` Lionel Landwerlin
2017-10-26 20:11               ` Chris Wilson
2017-10-27  0:12                 ` Lionel Landwerlin
2017-10-25 17:06 ` ✗ Fi.CI.BAT: failure for " Patchwork
