From: "Christian König" <christian.koenig@amd.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
"Nieto, David M" <David.Nieto@amd.com>,
Daniel Vetter <daniel@ffwll.ch>
Cc: Alex Deucher <alexdeucher@gmail.com>,
Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
Mailing list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness
Date: Tue, 18 May 2021 14:06:53 +0200 [thread overview]
Message-ID: <883336ef-749f-6c6e-287a-e14b652722a1@amd.com> (raw)
In-Reply-To: <c306eac1-d53f-a749-6aac-3f4f066031cb@linux.intel.com>
On 18.05.21 at 11:35, Tvrtko Ursulin wrote:
>
> On 17/05/2021 19:02, Nieto, David M wrote:
>> [AMD Official Use Only]
>>
>>
>> The format is simple:
>>
>> <ringname><index>: <XXX.XX> %
>
> Hm what time period does the percent relate to?
>
> The i915 implementation uses accumulated nanoseconds active. That way
> whoever reads the file can calculate the percentage relative to the
> time period between two reads of the file.
That sounds much saner to me as well. The percentage calculation inside
the kernel looks suspiciously misplaced.
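For illustration, the consumer side of that scheme is cheap; a rough Python sketch (the "engine-<name>: <ns> ns" key names here are hypothetical, not any driver's actual output):

```python
import time

def read_busy_ns(path):
    # Parse accumulated busy nanoseconds per engine from an fdinfo-style
    # file with hypothetical "engine-<name>: <ns> ns" lines.
    busy = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key.startswith("engine-"):
                busy[key.strip()] = int(value.split()[0])
    return busy

def utilization(path, interval=1.0):
    # Sample twice; the busy delta over wall time gives the percentage,
    # i.e. exactly the calculation the kernel would otherwise do itself.
    t0, before = time.monotonic_ns(), read_busy_ns(path)
    time.sleep(interval)
    t1, after = time.monotonic_ns(), read_busy_ns(path)
    wall = t1 - t0
    return {k: 100.0 * (after[k] - before.get(k, 0)) / wall for k in after}
```

The kernel only ever exports a monotonically increasing counter; the sampling period is entirely the reader's choice.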
>
>> we also have entries for the memory mapped:
>> mem <ttm pool> : <size> KiB
>
> Okay so in general key values per line in text format. Colon as
> delimiter.
>
> What common fields could be useful between different drivers and what
> common naming scheme, in order to enable as easy as possible creation
> of a generic top-like tool?
>
> driver: <ko name>
> pdev: <pci slot>
> ring-<name>: N <unit>
> ...
> mem-<name>: N <unit>
> ...
>
> What else?
> Is ring a good common name? We tend to use engine in i915 but I am not
> really bothered about it.
I would prefer engine as well. We are currently in the process of moving
away from kernel rings, so that notion doesn't make much sense to keep
going forward.
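A quick sketch of how a generic top-like tool could consume that proposed format (the key names are just the ones floated above, not a settled ABI):

```python
def parse_fdinfo(text):
    # Parse the proposed colon-delimited key/value format, collecting
    # "ring-*" and "mem-*" keys separately from scalar fields such as
    # driver and pdev. Key names follow the proposal in this thread only.
    info = {"rings": {}, "mem": {}}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue  # skip anything that is not a key: value line
        key, value = key.strip(), value.strip()
        if key.startswith("ring-"):
            info["rings"][key[len("ring-"):]] = value
        elif key.startswith("mem-"):
            info["mem"][key[len("mem-"):]] = value
        else:
            info[key] = value
    return info
```

Because unknown keys just land in the flat dict, drivers could add vendor-specific fields without breaking such a tool.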
Christian.
>
> Aggregated GPU usage could be easily and generically done by userspace
> by adding all rings and normalizing.
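E.g. something as simple as the following would do (a sketch; averaging over engines is one possible normalization, weighting is another):

```python
def aggregate_busy(per_engine_pct):
    # Fold per-engine busyness percentages into one GPU-wide figure by
    # summing and normalizing by the number of engines.
    if not per_engine_pct:
        return 0.0
    return sum(per_engine_pct.values()) / len(per_engine_pct)
```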
>
>> On my submission
>> https://lists.freedesktop.org/archives/amd-gfx/2021-May/063149.html I
>> added a python script to print out the info. It has lower CPU usage
>> than top, for example.
>>
>> To be absolutely honest, I agree that there is an overhead, but it
>> might not be as much as you fear.
>
> For me more the issue is that the extra number of operations grows
> with the number of open files on the system, which has no relation to
> the number of drm clients.
>
> Even more so if the monitoring tool wants to show _only_ DRM
> processes. Then the cost scales with the total number of processes
> times the total number of files on the server.
>
> This design inefficiency bothers me, yes. This is somewhat alleviated
> by the proposal from Chris
> (https://patchwork.freedesktop.org/patch/419042/?series=86692&rev=1),
> although there are downsides there as well, like needing to keep a map
> of pids to drm files in drivers.
>
> Btw what do you do in that tool for the same fd in a multi-threaded
> process? Do you show duplicate entries, or detect and ignore them? I
> guess I did not figure out whether you show by pid/tgid or by fd.
>
> Regards,
>
> Tvrtko
>
>> ------------------------------------------------------------------------
>> *From:* Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>> *Sent:* Monday, May 17, 2021 9:00 AM
>> *To:* Nieto, David M <David.Nieto@amd.com>; Daniel Vetter
>> <daniel@ffwll.ch>; Koenig, Christian <Christian.Koenig@amd.com>
>> *Cc:* Alex Deucher <alexdeucher@gmail.com>; Intel Graphics
>> Development <Intel-gfx@lists.freedesktop.org>; Mailing list - DRI
>> developers <dri-devel@lists.freedesktop.org>
>> *Subject:* Re: [PATCH 0/7] Per client engine busyness
>>
>> On 17/05/2021 15:39, Nieto, David M wrote:
>>> [AMD Official Use Only]
>>>
>>>
>>> Maybe we could try to standardize how the different submission ring
>>> usage gets exposed in the fdinfo? We went the simple way of just
>>> adding name and index, but if someone has a suggestion on how else
>>> we could format them so there is commonality across vendors, we
>>> could just amend those.
>>
>> Could you paste an example of your format?
>>
>> Standardized fdinfo sounds good to me in principle. But I would also
>> like people to look at the procfs proposal from Chris,
>> - link to which I have pasted elsewhere in the thread.
>>
>> The only potential issue I see with fdinfo at the moment is a bit of
>> extra cost in DRM client discovery (compared to my sysfs series and
>> also the procfs RFC from Chris). It would require reading all
>> processes (well, threads, then maybe aggregating threads into parent
>> processes), all fd symlinks, and doing a stat on them to figure out
>> which ones are DRM devices.
>>
>> Btw is DRM_MAJOR 226 considered uapi? I don't see it in uapi headers.
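For reference, the discovery walk being described could look roughly like this in Python (assuming the de-facto major number 226, which as noted is not exported in uapi headers):

```python
import os, stat

DRM_MAJOR = 226  # de-facto constant; not in the uapi headers

def drm_clients():
    # Walk /proc/<pid>/fd and stat each fd, keeping character devices
    # with the DRM major. This is exactly the cost discussed above: it
    # scales with all processes and all open files on the system, not
    # with the number of DRM clients.
    clients = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            fds = os.listdir(fd_dir)
        except OSError:  # process exited, or no permission
            continue
        for fd in fds:
            try:
                st = os.stat(os.path.join(fd_dir, fd))
            except OSError:  # fd closed while we were scanning
                continue
            if stat.S_ISCHR(st.st_mode) and os.major(st.st_rdev) == DRM_MAJOR:
                clients.setdefault(int(pid), []).append(int(fd))
    return clients
```

Note the per-fd stat is what makes this expensive; a multi-threaded process also shows up once per thread unless the tool aggregates by tgid.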
>>
>>> I'd really like process manager tools to display GPU usage
>>> regardless of which vendor's driver is installed.
>>
>> Definitely.
>>
>> Regards,
>>
>> Tvrtko