From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
To: Tejun Heo <tj@kernel.org>
Cc: "Tvrtko Ursulin" <tvrtko.ursulin@linux.intel.com>,
Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Zefan Li" <lizefan.x@bytedance.com>,
"Dave Airlie" <airlied@redhat.com>,
"Daniel Vetter" <daniel.vetter@ffwll.ch>,
"Rob Clark" <robdclark@chromium.org>,
"Stéphane Marchesin" <marcheu@chromium.org>,
"T . J . Mercier" <tjmercier@google.com>,
Kenny.Ho@amd.com, "Christian König" <christian.koenig@amd.com>,
"Brian Welty" <brian.welty@intel.com>,
"Tvrtko Ursulin" <tvrtko.ursulin@intel.com>,
"Eero Tamminen" <eero.t.tamminen@intel.com>
Subject: Re: [PATCH 16/17] cgroup/drm: Expose memory stats
Date: Thu, 27 Jul 2023 15:42:58 +0200
Message-ID: <05178cf3-df1c-80a7-12ad-816fafbc2df7@linux.intel.com>
In-Reply-To: <ZMF3rLioJK9QJ0yj@slm.duckdns.org>

Hey,
On 2023-07-26 21:44, Tejun Heo wrote:
> Hello,
>
> On Wed, Jul 26, 2023 at 12:14:24PM +0200, Maarten Lankhorst wrote:
>>> So, yeah, if you want to add memory controls, we better think through how
>>> the fd ownership migration should work.
>>
>> I've taken a look at the series, since I have been working on cgroup memory
>> eviction.
>>
>> The scheduling stuff will work for i915, since it has a purely software
>> execlist scheduler, but I don't think it will work for GuC (firmware)
>> scheduling or other drivers that use the generic drm scheduler.
>>
>> For something like this, you would probably want it to work inside the drm
>> scheduler first. Presumably, this can be done by setting a weight on each
>> runqueue, and perhaps adding a callback to update the weight of an
>> already-running queue. Calculating the weights hierarchically might be fun...
>
> I don't have any idea on this front. The basic idea of making high level
> distribution decisions in core code and letting individual drivers enforce
> that in a way which fits them the best makes sense to me but I don't know
> enough to have an opinion here.
>
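To make the hierarchical weight idea a bit more concrete, here is a rough
sketch of how the per-level shares could be folded into a single effective
weight. Every name in it is made up for illustration; this is not the drm
scheduler API:

    #include <stdint.h>

    #define WEIGHT_BASE 10000ull

    /* One node in a cgroup-like weight hierarchy (hypothetical struct). */
    struct wnode {
            struct wnode *parent;
            uint32_t weight;          /* weight relative to siblings */
            uint32_t children_weight; /* sum of direct children's weights */
    };

    /*
     * Multiply out this node's share of each ancestor level, scaled to a
     * fixed-point base so leaves anywhere in the tree compare directly.
     */
    static uint64_t effective_weight(const struct wnode *n)
    {
            uint64_t w = WEIGHT_BASE;

            while (n && n->parent) {
                    w = w * n->weight / n->parent->children_weight;
                    n = n->parent;
            }
            return w;
    }

A callback on weight changes would then presumably recompute this for any
already-running queue.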
>> I have taken a look at how the rest of the cgroup controllers handle
>> ownership when a process moves to a different cgroup, and the answer was:
>> not at all. If we
>
> For persistent resources, that's the general rule. Whoever instantiates a
> resource gets to own it until the resource gets freed. There is an exception
> with the pid controller, and there are discussions around whether we want
> some sort of migration behavior with memcg, but yes, by and large, the
> instantiator being the owner is the general model cgroup follows.
>
>> attempt to create the scheduler controls only the first time the fd is
>> used, you could probably get rid of all the tracking.
>> This can be done very easily with the drm scheduler.
>>
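As a sketch of what that first-use binding could look like (all names here
are hypothetical, not an existing drm interface):

    #include <stddef.h>

    /* Hypothetical stand-in for a cgroup reference. */
    struct cgroup_ref { int id; };

    /* Hypothetical per-file state; not the actual struct drm_file. */
    struct file_state {
            struct cgroup_ref *owner; /* NULL until the first ioctl */
    };

    /* Stub for looking up the calling task's cgroup. */
    static struct cgroup_ref *current_cgroup_ref(void)
    {
            static struct cgroup_ref self;
            return &self;
    }

    /*
     * Called at the start of every ioctl: ownership is established once,
     * by whoever touches the fd first, and never migrated afterwards.
     */
    static void bind_owner_on_first_use(struct file_state *f)
    {
            if (!f->owner)
                    f->owner = current_cgroup_ref();
    }
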
>> WRT memory, I think the consensus is to track system memory like normal
>> memory. Stolen memory doesn't need to be tracked. It's kernel-only memory,
>> used for internal bookkeeping.
>>
>> The only time userspace can directly manipulate stolen memory is by mapping
>> the pinned initial framebuffer into its own address space. The only
>> allocation happens when a framebuffer is displayed and framebuffer
>> compression creates some stolen memory. Userspace is not aware of this,
>> though, and has no way to manipulate those contents.
>
> So, my dumb understanding:
>
> * Ownership of an fd can be established on the first ioctl call and doesn't
> need to be migrated afterwards. There are no persistent resources to
> migrate at the time of the first call.
>
> * Memory then can be tracked in a similar way to memcg. Memory gets charged
> to the initial instantiator and doesn't need to be moved around
> afterwards. There may be some discrepancies around stolen memory, but the
> magnitude of inaccuracy introduced that way is limited and bounded, and can
> be safely ignored.
>
> Is that correct?
Hey,
Yeah, mostly. I think we can stop tracking stolen memory. I stopped doing
that for Xe; there is literally nothing in there for userspace to control.
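
To give the memory side some shape: the memcg-style model boils down to a
charge at allocation time against the fd's original owner and an uncharge at
free, never on cgroup migration. A minimal sketch, with all names
hypothetical:

    #include <stdint.h>

    /* Hypothetical charge bookkeeping; not an existing kernel interface. */
    struct mem_account {
            uint64_t charged; /* bytes charged to the owning cgroup */
            uint64_t limit;   /* 0 means unlimited */
    };

    /* Charge at allocation time, to whoever instantiated the fd. */
    static int mem_charge(struct mem_account *a, uint64_t bytes)
    {
            if (a->limit && a->charged + bytes > a->limit)
                    return -1; /* over budget; -ENOMEM in real code */
            a->charged += bytes;
            return 0;
    }

    /* Uncharge when the buffer object is freed. */
    static void mem_uncharge(struct mem_account *a, uint64_t bytes)
    {
            if (bytes > a->charged)
                    bytes = a->charged;
            a->charged -= bytes;
    }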
Cheers,
Maarten