From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: "Christian König" <christian.koenig@amd.com>,
Intel-gfx@lists.freedesktop.org
Cc: "Rob Clark" <robdclark@chromium.org>,
Kenny.Ho@amd.com, "Daniel Vetter" <daniel.vetter@ffwll.ch>,
"Johannes Weiner" <hannes@cmpxchg.org>,
linux-kernel@vger.kernel.org,
"Stéphane Marchesin" <marcheu@chromium.org>,
"Zefan Li" <lizefan.x@bytedance.com>,
"Dave Airlie" <airlied@redhat.com>, "Tejun Heo" <tj@kernel.org>,
cgroups@vger.kernel.org, "T . J . Mercier" <tjmercier@google.com>
Subject: Re: [Intel-gfx] [RFC 02/17] drm: Track clients per owning process
Date: Thu, 27 Oct 2022 15:35:37 +0100 [thread overview]
Message-ID: <c88d0c33-8616-faa4-b33e-5de36d7b73fd@linux.intel.com> (raw)
In-Reply-To: <04182f67-2c98-add4-be60-539ffe2e9d6a@amd.com>

On 20/10/2022 12:33, Christian König wrote:
> Am 20.10.22 um 09:34 schrieb Tvrtko Ursulin:
>>
>> On 20/10/2022 07:40, Christian König wrote:
>>> Am 19.10.22 um 19:32 schrieb Tvrtko Ursulin:
>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>
>>>> To enable propagation of settings from the cgroup drm controller to
>>>> drm we
>>>> need to start tracking which processes own which drm clients.
>>>>
>>>> Implement that by tracking the struct pid pointer of the owning
>>>> process in
>>>> a new XArray, pointing to a structure containing a list of associated
>>>> struct drm_file pointers.
>>>>
>>>> Clients are added and removed under the filelist mutex and RCU list
>>>> operations are used below it to allow for lockless lookup.
>>>
>>> That won't work easily like this. The problem is that file_priv->pid
>>> is usually not accurate these days:
>>>
>>> From the debugfs clients file:
>>>
>>> systemd-logind 773 0 y y 0 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> firefox 2945 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> chrome 35940 128 n n 1000 0
>>> chrome 35940 0 n y 1000 1
>>> chrome 35940 0 n y 1000 2
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>> Xorg 1639 128 n n 1000 0
>>>
>>> This is with glxgears and a bunch other OpenGL applications running.
>>>
>>> The problem is that for most applications the X/Wayland server is now
>>> opening the render node. The only exceptions in this case are apps
>>> using DRI2 (VA-API?).
>>>
>>> I always wanted to fix this and actually track who is using the file
>>> descriptor instead of who opened it, but never had the time to do this.
>>
>> There's a patch later in the series which allows client records to be
>> migrated to a new PID, and then i915 patch to do that when fd is used
>> for context create. That approach I think worked well enough in the
>> past. So maybe it could be done in the DRM core at some suitable entry
>> point.
>
> Yeah, that makes some sense. I think you should wire that inside
> drm_ioctl(), as far as I know more or less all uses of a file descriptor
> would go through that function.
>
> And maybe make that a stand alone patch, cause that can go upstream as a
> bug fix independently if you ask me.

I've put it on my todo list to try to come up with something standalone
for this problem. We'll see if I manage to send it separately, or
perhaps I will start the next cgroup controller RFC with those patches.
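
A minimal sketch of what such a standalone fix wired into drm_ioctl()
might look like. This is only an illustration of the idea discussed
above — the helper name drm_file_update_pid(), the field layout and the
locking shown are assumptions, not code from the series:

```c
/* Hypothetical sketch: re-associate a drm_file with the current task's
 * struct pid on first real use, called from drm_ioctl(). All names
 * here are illustrative.
 */
static void drm_file_update_pid(struct drm_file *file_priv)
{
	struct drm_device *dev = file_priv->minor->dev;
	struct pid *pid = task_tgid(current);
	struct pid *old;

	if (pid == file_priv->pid)
		return; /* common case: the opener is also the user */

	/* Slow path: the fd was opened on the client's behalf (e.g. by
	 * the X/Wayland server), so migrate the client record to the
	 * process actually using it.
	 */
	mutex_lock(&dev->filelist_mutex);
	old = file_priv->pid;
	file_priv->pid = get_pid(pid);
	put_pid(old);
	/* ... migrate any cgroup-tracking entry to the new pid here ... */
	mutex_unlock(&dev->filelist_mutex);
}
```

Calling this from drm_ioctl() would cover more or less every real use
of the file descriptor, as suggested above, at the cost of one pointer
comparison per ioctl in the common case.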
>>> I think you need to fix this problem first. And BTW: an unsigned
>>> long doesn't work as a PID either with containers.
>>
>> This I am not familiar with so would like to hear more if you could
>> point me in the right direction at least.
>
> Uff, I'm the wrong person to ask about stuff like that. I can just say
> from experience, because I've run into that trap as well.
>
>>
>> My assumption was that the struct pid *, which is what I store in an
>> unsigned long, would be unique in a system where there is a single
>> kernel running, so as long as lifetimes are correct (the entry is
>> released from tracking when the fd is closed, which happens implicitly
>> on process exit) it would work. You are suggesting that is not so?
>
> I think you should have the pointer to struct pid directly here since
> that is a reference counted structure IIRC. But don't ask me what the
> semantics is how to get or put a reference.

Yeah, I think I have all that. I track the struct pid, with a reference
held, in the drm client, and release it when the file descriptor is
closed (indirectly via the DRM close hook). All I need, I think, is for
that mapping to answer "which drm_file objects are in use by this
struct pid pointer?". I don't see a problem with lifetimes or scope yet.
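
For concreteness, a sketch of the mapping described in the commit
message — a struct pid pointer used as XArray key, pointing at an
RCU-traversed list of drm_file. The structure layout, the
drm_clients_add() helper and the clients_node list field are all
illustrative assumptions, not the series' actual code:

```c
/* Hypothetical sketch of the pid -> clients mapping. */
struct drm_pid_clients {
	atomic_t num;			/* clients owned by this pid */
	struct list_head file_list;	/* drm_file entries, RCU-traversed */
	struct rcu_head rcu;
};

static DEFINE_XARRAY(drm_pid_clients);

/* Called under dev->filelist_mutex; the struct pid pointer itself is
 * the XArray key, so the entry stays unique for the pid's lifetime.
 */
static int drm_clients_add(struct drm_file *file_priv)
{
	unsigned long pid = (unsigned long)file_priv->pid;
	struct drm_pid_clients *clients;

	clients = xa_load(&drm_pid_clients, pid);
	if (!clients) {
		clients = kmalloc(sizeof(*clients), GFP_KERNEL);
		if (!clients)
			return -ENOMEM;
		atomic_set(&clients->num, 0);
		INIT_LIST_HEAD(&clients->file_list);
		xa_store(&drm_pid_clients, pid, clients, GFP_KERNEL);
	}

	atomic_inc(&clients->num);
	list_add_tail_rcu(&file_priv->clients_node, &clients->file_list);

	return 0;
}
```

Lookup by struct pid would then be lockless: xa_load() plus
list_for_each_entry_rcu() under rcu_read_lock(), matching the "RCU list
operations below the filelist mutex" scheme from the commit message.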
Regards,
Tvrtko