public inbox for cgroups@vger.kernel.org
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Tejun Heo <tj@kernel.org>
Cc: "Rob Clark" <robdclark@chromium.org>,
	Kenny.Ho@amd.com, "Dave Airlie" <airlied@redhat.com>,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	Intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org,
	"Christian König" <christian.koenig@amd.com>,
	"Zefan Li" <lizefan.x@bytedance.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	cgroups@vger.kernel.org,
	"Eero Tamminen" <eero.t.tamminen@intel.com>,
	"T . J . Mercier" <tjmercier@google.com>
Subject: Re: [PATCH 16/17] cgroup/drm: Expose memory stats
Date: Thu, 27 Jul 2023 18:08:27 +0100	[thread overview]
Message-ID: <5d65d387-2718-06c3-ee5d-8a7da6e3ddfd@linux.intel.com> (raw)
In-Reply-To: <fb734626-6041-1e68-38d7-221837284cf1@linux.intel.com>


On 27/07/2023 12:54, Maarten Lankhorst wrote:
> Hey,
> 
> On 2023-07-26 13:41, Tvrtko Ursulin wrote:
>>
>> On 26/07/2023 11:14, Maarten Lankhorst wrote:
>>> Hey,
>>>
>>> On 2023-07-22 00:21, Tejun Heo wrote:
>>>> On Wed, Jul 12, 2023 at 12:46:04PM +0100, Tvrtko Ursulin wrote:
>>>>>    $ cat drm.memory.stat
>>>>>    card0 region=system total=12898304 shared=0 active=0 resident=12111872 purgeable=167936
>>>>>    card0 region=stolen-system total=0 shared=0 active=0 resident=0 purgeable=0
>>>>>
>>>>> Data is generated on demand for simplicity of implementation, i.e. no
>>>>> running totals are kept or accounted during migrations and such.
>>>>> Various optimisations, such as cheaper collection of data, are
>>>>> possible but deliberately left out for now.
>>>>>
>>>>> Overall, the feature is deemed to be useful to container orchestration
>>>>> software (and manual management).
>>>>>
>>>>> Limits, either soft or hard, are not envisaged to be implemented on
>>>>> top of this approach due to the on-demand nature of collecting the
>>>>> stats.
>>>>
>>>> So, yeah, if you want to add memory controls, we better think 
>>>> through how
>>>> the fd ownership migration should work.
>>> I've taken a look at the series, since I have been working on cgroup 
>>> memory eviction.
>>>
>>> The scheduling stuff will work for i915, since it has a purely 
>>> software execlist scheduler, but I don't think it will work for GuC 
>>> (firmware) scheduling or other drivers that use the generic drm 
>>> scheduler.
>>
>> It actually works - I used to have a blurb in the cover letter about 
>> it but apparently I dropped it. Just a bit less well with many 
>> clients, since there are fewer priority levels.
>>
>> All that the design requires from the individual drivers is some way 
>> to react to the "you are over budget by this much" signal. The rest is 
>> driver and backend specific.
> 
> What I mean is that this signal may not be applicable since the drm 
> scheduler just schedules jobs that run. Adding a weight might be done in 
> hardware, since it's responsible for scheduling which context gets to 
> run. The over budget signal is useless in that case, and you just need 
> to set a scheduling priority for the hardware instead.

The over budget callback lets the driver know its assigned budget and 
its current utilisation. Already with that data drivers could implement 
something smarter than what I did in my RFC. So I don't think the 
callback is completely useless even for some smarter implementation 
which potentially ties into firmware scheduling.

Anyway, I maintain this is an implementation detail.
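To make the shape of that interaction concrete, here is a minimal userspace model of the "over budget" signal, sketched in Python. The names (Client, over_budget_cb) and the microsecond accounting are illustrative assumptions, not the RFC's actual in-kernel API:

```python
# Hypothetical model of the controller handing a driver its budget and
# current utilisation; the driver decides how to react. This is an
# illustration of the idea, not the RFC's real interface.

class Client:
    def __init__(self, name, usage_us):
        self.name = name          # client identifier
        self.usage_us = usage_us  # observed GPU time, microseconds
        self.throttled = False

def over_budget_cb(client, budget_us):
    """Invoked with the assigned budget; returns how far over budget
    the client is (0 if within budget)."""
    over_by = client.usage_us - budget_us
    if over_by > 0:
        # A real driver could demote scheduling priority, adjust
        # firmware scheduler weights, etc. Here we just record it.
        client.throttled = True
    return max(over_by, 0)

c = Client("client0", usage_us=15000)
print(over_budget_cb(c, budget_us=10000))  # over budget by 5000 us
```

The point is that the callback carries enough information (budget plus utilisation) for a backend to make its own throttling decision, including one that forwards a weight to firmware scheduling.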

>>> For something like this, you would probably want it to work inside 
>>> the drm scheduler first. Presumably, this can be done by setting a 
>>> weight on each runqueue, and perhaps adding a callback to update one 
>>> for a running queue. Calculating the weights hierarchically might be 
>>> fun..
>>
>> It does not need to work in the drm scheduler first. In fact drm 
>> scheduler based drivers can plug into what I have since it already has 
>> the notion of scheduling priorities.
>>
>> They would only need to implement a hook which allows the cgroup 
>> controller to query client GPU utilisation and another to receive the 
>> over budget signal.
>>
>> Amdgpu and msm AFAIK could be easy candidates because they both 
>> support per client utilisation and priorities.
>>
>> Looks like I need to put all this info back into the cover letter.
>>
>> Also, hierarchic weights and time budgets are all already there. What 
>> could be done later is make this all smarter and respect the time 
>> budget with more precision. That would however, in many cases 
>> including Intel, require co-operation with the firmware. In any case 
>> it is only work in the implementation, while the cgroup control 
>> interface remains the same.
>>
>>> I have taken a look at how the rest of cgroup controllers change 
>>> ownership when moved to a different cgroup, and the answer was: not 
>>> at all. If we attempt to create the scheduler controls only on the 
>>> first time the fd is used, you could probably get rid of all the 
>>> tracking.
>>
>> Can you send a CPU file descriptor from process A to process B and 
>> have CPU usage belonging to process B show up in process' A cgroup, or 
>> vice-versa? Nope, I am not making any sense, am I? My point being it 
>> is not like-to-like, model is different.
>>
>> No ownership transfer would mean in wide deployments all GPU 
>> utilisation would be assigned to Xorg and so there is no point to any 
>> of this. No way to throttle a cgroup with un-important GPU clients for 
>> instance.
> If you just grab the current process' cgroup when a drm_sched_entity is 
> created, you don't have everything charged to X.org. No need for 
> complicated ownership tracking in drm_file. The same equivalent should 
> be done in i915 as well when a context is created as it's not using the 
> drm scheduler.

Okay so essentially nuking the concept that a DRM client belongs to one 
cgroup and instead tracking at the context level. That is an interesting 
idea. I suspect the implementation could require somewhat generalising 
the concept of an "execution context", or at least expressing it via the 
DRM cgroup controller.

I can give this a spin, or at least some more detailed thought, once we 
close on a few more details regarding charging in general.

>>> This can be done very easily with the drm scheduler.
>>>
>>> WRT memory, I think the consensus is to track system memory like 
>>> normal memory. Stolen memory doesn't need to be tracked. It's kernel 
>>> only memory, used for internal bookkeeping only.
>>>
>>> The only time userspace can directly manipulate stolen memory, is by 
>>> mapping the pinned initial framebuffer to its own address space. The 
>>> only allocation it can do is when a framebuffer is displayed, and 
>>> framebuffer compression creates some stolen memory. Userspace is not
>>> aware of this though, and has no way to manipulate those contents.
>>
>> Stolen memory is irrelevant and not something the cgroup controller 
>> knows about. The point is drivers say which memory regions they have 
>> and their utilisation.
>>
>> Imagine instead of stolen it said vram0, or on Intel multi-tile it 
>> shows local0 and local1. People working with containers are interested 
>> to see this breakdown. I guess the parallel and use case here is 
>> closer to memory.numa_stat.
> Correct, but for the same reason, I think it might be more useful to 
> split up the weight too.
> 
> A single scheduling weight for the global GPU might be less useful than 
> per engine, or per tile perhaps..

Yeah, there is some complexity there for sure and it could be a larger 
write up. In short, per engine stuff tends to work out in practice as 
is, given that each driver can decide what to do upon receiving the 
signal.

In the i915 RFC for instance, if it gets the "over budget" signal from 
the group but sees that the physical engines belonging to this specific 
GPU are not over-subscribed, it simply omits any throttling. Which in 
practice works out fine for two clients competing for different engines. 
The same would be true for multiple GPUs (or tiles with our stuff) in 
the same cgroup.
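That decision can be sketched in a few lines. This is a userspace Python model of the behaviour described above, not i915 code; the engine names and the 100% contention threshold are assumptions for illustration:

```python
# Model of the i915 RFC behaviour described above: even when a cgroup is
# over its budget, skip throttling unless at least one physical engine
# the client actually uses is oversubscribed.

def should_throttle(over_budget, engine_busy_pct, engines_used,
                    oversubscribed_at=100):
    """engine_busy_pct maps engine name -> busyness percentage."""
    if not over_budget:
        return False
    # Throttling only helps if an engine this client runs on is contended.
    return any(engine_busy_pct[e] >= oversubscribed_at
               for e in engines_used)

busy = {"rcs0": 100, "vcs0": 40}
print(should_throttle(True, busy, ["vcs0"]))  # client on an idle engine
print(should_throttle(True, busy, ["rcs0"]))  # client on a contended one
```

Two clients over budget but pinned to different, uncontended engines would both fall into the first case, which is why the coarse per-group signal still behaves sensibly in practice.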

Going back to the single scheduling weight versus something more fine 
grained: we could choose to follow, for instance, the io.weight format? 
Start with drm.weight being "default 100" and later extend to per card 
(or more):

"""
default 100
card0 20
card1 50
"""

In this case I would convert drm.weight to this format straight away for 
the next respin, only without per-card support just yet.
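For clarity, a tolerant parser for that nested-keyed format is only a few lines. This is a hedged userspace sketch of the proposed syntax, not the controller's actual parser; falling back to "default" for cards without an override mirrors how io.weight behaves:

```python
# Sketch of parsing the proposed io.weight-style drm.weight syntax:
# a "default <weight>" line plus optional per-card overrides.

def parse_drm_weight(text):
    weights = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        weights[key] = int(value)
    return weights

conf = """\
default 100
card0 20
card1 50
"""
w = parse_drm_weight(conf)
# Cards without an explicit entry fall back to the default weight.
print(w["default"], w["card0"], w.get("card2", w["default"]))
```

Starting with only the "default" line supported keeps the file forward compatible: per-card lines can be added later without changing the format.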

Regards,

Tvrtko
