From: xiujianfeng <xiujianfeng@huawei.com>
To: "Michal Koutný" <mkoutny@suse.com>,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Tejun Heo <tj@kernel.org>, Zefan Li <lizefan.x@bytedance.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>, Shuah Khan <shuah@kernel.org>,
Muhammad Usama Anjum <usama.anjum@collabora.com>
Subject: Re: [PATCH v5 2/5] cgroup/pids: Make event counters hierarchical
Date: Tue, 16 Jul 2024 11:27:39 +0800
Message-ID: <cb0efc16-6df2-72b7-47ea-ce524d428cc1@huawei.com>
In-Reply-To: <f124ce60-196e-2392-c4a9-11cdcacf9927@huawei.com>

Hi,

Friendly ping; more comments below.

On 2024/7/3 14:59, xiujianfeng wrote:
>
>
> On 2024/5/21 17:21, Michal Koutný wrote:
>> The pids.events file should honor the hierarchy, so make the events
>> propagate from their origin up to the root on the unified hierarchy. The
>> legacy behavior remains non-hierarchical.
>>
>> Signed-off-by: Michal Koutný <mkoutny@suse.com>
>> --
> [...]
>> diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
>> index a557f5c8300b..c09b744d548c 100644
>> --- a/kernel/cgroup/pids.c
>> +++ b/kernel/cgroup/pids.c
>> @@ -238,6 +238,34 @@ static void pids_cancel_attach(struct cgroup_taskset *tset)
>> }
>> }
>>
>> +static void pids_event(struct pids_cgroup *pids_forking,
>> + struct pids_cgroup *pids_over_limit)
>> +{
>> + struct pids_cgroup *p = pids_forking;
>> + bool limit = false;
>> +
>> + for (; parent_pids(p); p = parent_pids(p)) {
>> + /* Only log the first time limit is hit. */
>> + if (atomic64_inc_return(&p->events[PIDCG_FORKFAIL]) == 1) {
>> + pr_info("cgroup: fork rejected by pids controller in ");
>> + pr_cont_cgroup_path(p->css.cgroup);
>> + pr_cont("\n");
>> + }
>> + cgroup_file_notify(&p->events_file);
>> +
>> + if (!cgroup_subsys_on_dfl(pids_cgrp_subsys) ||
>> + cgrp_dfl_root.flags & CGRP_ROOT_PIDS_LOCAL_EVENTS)
>> + break;
>> +
>> + if (p == pids_over_limit)
>> + limit = true;
>> + if (limit)
>> + atomic64_inc(&p->events[PIDCG_MAX]);
>> +
>> + cgroup_file_notify(&p->events_file);
>
> Hi Michal,
>
> I have doubts about this code. To better illustrate the problem, I am
> posting the final code here.
>
> static void pids_event(struct pids_cgroup *pids_forking,
>                        struct pids_cgroup *pids_over_limit)
> {
> ...
>         cgroup_file_notify(&p->events_local_file);
>         if (!cgroup_subsys_on_dfl(pids_cgrp_subsys) ||
>             cgrp_dfl_root.flags & CGRP_ROOT_PIDS_LOCAL_EVENTS)
>                 return;
>
>         for (; parent_pids(p); p = parent_pids(p)) {
>                 if (p == pids_over_limit) {
>                         limit = true;
>                         atomic64_inc(&p->events_local[PIDCG_MAX]);
>                         cgroup_file_notify(&p->events_local_file);
>                 }
>                 if (limit)
>                         atomic64_inc(&p->events[PIDCG_MAX]);
>
>                 cgroup_file_notify(&p->events_file);
>         }
> }
>
> Consider this scenario: there are 4 groups A, B, C and D, where each
> latter one is a child of the former:
>
> root->A->B->C->D
>
> Now the user is polling on C.pids.events. When a process in D forks and
> fails due to B's pids.max restriction (pids_forking is D, and
> pids_over_limit is B), the user is woken up. However, when the user then
> reads C.pids.events, they will find that its content has not changed,
> because 'limit' only becomes true starting from B, while C.pids.events
> is printed as:
>
> seq_printf(sf, "max %lld\n", (s64)atomic64_read(&events[PIDCG_MAX]));
>
> Wouldn't this behavior confuse the user? Should the code be changed
> to this?
>
>         if (limit) {
>                 atomic64_inc(&p->events[PIDCG_MAX]);
>                 cgroup_file_notify(&p->events_file);
>         }
>
Or should the for loop be changed to the following?

        atomic64_inc(&pids_over_limit->events_local[PIDCG_MAX]);
        cgroup_file_notify(&pids_over_limit->events_local_file);
        for (p = pids_over_limit; parent_pids(p); p = parent_pids(p)) {
                atomic64_inc(&p->events[PIDCG_MAX]);
                cgroup_file_notify(&p->events_file);
        }
The current behaviour is quite different from that of other subsystems,
such as memcg, which confuses me; maybe I am missing something.
I would appreciate it if anyone could respond.
Thread overview: 11+ messages
2024-05-21 9:21 [PATCH v5 0/5] pids controller events rework Michal Koutný
2024-05-21 9:21 ` [PATCH v5 1/5] cgroup/pids: Separate semantics of pids.events related to pids.max Michal Koutný
2024-05-21 9:21 ` [PATCH v5 2/5] cgroup/pids: Make event counters hierarchical Michal Koutný
2024-07-03 6:59 ` xiujianfeng
2024-07-16 3:27 ` xiujianfeng [this message]
2024-07-25 9:38 ` Michal Koutný
2024-07-30 3:21 ` Xiu Jianfeng
2024-05-21 9:21 ` [PATCH v5 3/5] cgroup/pids: Add pids.events.local Michal Koutný
2024-05-21 9:21 ` [PATCH v5 4/5] selftests: cgroup: Lexicographic order in Makefile Michal Koutný
2024-05-21 9:21 ` [PATCH v5 5/5] selftests: cgroup: Add basic tests for pids controller Michal Koutný
2024-05-26 18:47 ` [PATCH v5 0/5] pids controller events rework Tejun Heo