Intel-GFX Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	"Nieto, David M" <David.Nieto@amd.com>,
	Alex Deucher <alexdeucher@gmail.com>
Cc: Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness
Date: Fri, 14 May 2021 17:10:29 +0200	[thread overview]
Message-ID: <a2b03603-eb3e-7bef-a799-c15cfb1a8e0b@amd.com> (raw)
In-Reply-To: <6cf2f14a-6a16-5ea3-d307-004faad4cc79@linux.intel.com>

On 14.05.21 at 17:03, Tvrtko Ursulin wrote:
>
> On 14/05/2021 15:56, Christian König wrote:
>> On 14.05.21 at 16:47, Tvrtko Ursulin wrote:
>>>
>>> On 14/05/2021 14:53, Christian König wrote:
>>>>>
>>>>> David also said that you considered sysfs but were wary of 
>>>>> exposing process info in there. To clarify, my patch is not 
>>>>> exposing a sysfs entry per process, but one per open drm fd.
>>>>>
>>>>
>>>> Yes, we discussed this as well, but then rejected the approach.
>>>>
>>>> To have useful information related to the open drm fd you need to 
>>>> relate that to the process(es) which have that file descriptor open. 
>>>> Just tracking who opened it first, like DRM does, is pretty useless 
>>>> on modern systems.
>>>
>>> We do update the pid/name for fds passed over unix sockets.
>>
>> Well I just double checked and that is not correct.
>>
>> Could be that i915 has some special code for that, but on my laptop I 
>> only see the X server under the "clients" debugfs file.
>
> Yes, we have special code in i915 for this. It is part of the series 
> we are discussing here.

Ah, yeah, you should mention that. Could we please separate that into 
common code instead? Because I really see that as a bug in the current 
handling, independent of the discussion here.

As far as I know all IOCTLs go through some common place in DRM anyway.

>>>> But an "lsof /dev/dri/renderD128" for example does exactly what top 
>>>> does as well: it iterates over /proc and sees which processes have 
>>>> that file open.
>>>
>>> Lsof is quite inefficient for this use case. It has to open _all_ 
>>> open files for _all_ processes on the system to find a handful of 
>>> ones which may have the DRM device open.
>>
>> Completely agree.
>>
>> The key point is you either need to have all references to an open 
>> fd, or at least track whoever last used that fd.
>>
>> At least the last time I looked even the fs layer didn't know which 
>> fd is open by which process. So there wasn't really any alternative 
>> to the lsof approach.
>
> I asked you about the use case you have in mind, which you did not 
> answer. Otherwise I don't understand when you would need to walk all 
> files. What information do you want to get?

Per-fd debugging information; e.g. unlike the top use case, you already 
know which process you want to look at.

>
> For the use case of knowing which DRM file is using how much GPU time 
> on engine X we do not need to walk all open files either with my sysfs 
> approach or the proc approach from Chris. (In the former case we 
> optionally aggregate by PID at presentation time, and in the latter 
> case aggregation is implicit.)
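
For reference, the "aggregate by PID at presentation time" step could
look roughly like this. This is a hypothetical illustration only: the
record layout and field names are invented here, not what the series
actually exposes:

```python
from collections import defaultdict

# Hypothetical per-fd samples as a sysfs-style interface might expose
# them: one record per open DRM fd, tagged with the owning pid and the
# per-engine busy time in nanoseconds.
per_fd_samples = [
    {"pid": 1200, "engine": "rcs0", "busy_ns": 500_000},
    {"pid": 1200, "engine": "rcs0", "busy_ns": 250_000},  # second fd, same process
    {"pid": 1337, "engine": "vcs0", "busy_ns": 100_000},
]

def aggregate_by_pid(samples):
    """Sum per-fd busyness into per-(pid, engine) totals, i.e. the
    aggregation a presentation tool would do -- no walk over all open
    files on the system is needed."""
    totals = defaultdict(int)
    for s in samples:
        totals[(s["pid"], s["engine"])] += s["busy_ns"]
    return dict(totals)
```

The point being that only the DRM fds themselves are enumerated, never
every fd of every process.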

I'm unsure whether we should go with the sysfs approach, the proc 
approach, or something completely different.

In general it would be nice to have a way to find all the fd references 
for an open inode.

Regards,
Christian.

>
> Regards,
>
> Tvrtko


