public inbox for cgroups@vger.kernel.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: Chengming Zhou <zhouchengming@bytedance.com>
Cc: Tejun Heo <tj@kernel.org>,
	corbet@lwn.net, surenb@google.com, mingo@redhat.com,
	peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org,
	bsegall@google.com, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	songmuchun@bytedance.com
Subject: Re: [PATCH v2 09/10] sched/psi: per-cgroup PSI stats disable/re-enable interface
Date: Wed, 10 Aug 2022 11:25:07 -0400	[thread overview]
Message-ID: <YvPN07UlaPFAdlet@cmpxchg.org> (raw)
In-Reply-To: <b89155d3-9315-fefc-408b-4cf538360a1c@bytedance.com>

On Wed, Aug 10, 2022 at 09:30:59AM +0800, Chengming Zhou wrote:
> On 2022/8/10 08:39, Chengming Zhou wrote:
> > On 2022/8/10 01:48, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Mon, Aug 08, 2022 at 07:03:40PM +0800, Chengming Zhou wrote:
> >>> So this patch introduces a per-cgroup PSI stats disable/re-enable
> >>> interface, "cgroup.psi": a read-write single-value file whose
> >>> allowed values are "0" and "1". The default is "1", so per-cgroup
> >>> PSI stats are enabled by default.
> >>
> >> Given that the knobs are named {cpu|memory|io}.pressure, I wonder whether
> >> "cgroup.psi" is the best name. Also, it doesn't convey that it's the
> >> enable/disable knob. I think it needs a better name.
> > 
> > Yes, "cgroup.psi" is not good. What abort "pressure.enable" or "cgroup.psi_enable"?
> 
> That doesn't look good either; what do you think of "cgroup.pressure.enable"?

How about just cgroup.pressure? Too ambiguous?

cgroup.pressure.enable sounds good to me too. Or, because it's
default-enabled and that likely won't change, cgroup.pressure.disable.
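For reference, the semantics under discussion can be sketched as a pair of shell helpers. This is a minimal sketch that assumes the knob lands as a single-value file named "cgroup.pressure" (one of the names floated in this thread, not yet settled); the helper names themselves are hypothetical.

```shell
# Sketch of the proposed interface: a per-cgroup single-value file whose
# allowed values are "0" and "1", default "1" (accounting enabled).
# Assumes the knob is named "cgroup.pressure"; helper names are hypothetical.

# get_psi_enabled <cgroup-dir>: print "1" if PSI accounting is on, "0" if off.
get_psi_enabled() {
    cat "$1/cgroup.pressure"
}

# set_psi_enabled <cgroup-dir> <0|1>: writing "0" disables per-cgroup PSI
# accounting (the cgroup's {cpu,memory,io}.pressure files stop updating);
# writing "1" re-enables it.
set_psi_enabled() {
    echo "$2" > "$1/cgroup.pressure"
}
```

With a mounted cgroup2 hierarchy this would be used as, e.g., `set_psi_enabled /sys/fs/cgroup/workload 0`; since the default is "1", accounting stays on unless explicitly disabled.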


Thread overview: 32+ messages
2022-08-08 11:03 [PATCH v2 00/10] sched/psi: some optimization and extension Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 01/10] sched/psi: fix periodic aggregation shut off Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 03/10] sched/psi: move private helpers to sched/stats.h Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 04/10] sched/psi: don't change task psi_flags when migrate CPU/group Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 05/10] sched/psi: don't create cgroup PSI files when psi_disabled Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 06/10] sched/psi: save percpu memory when !psi_cgroups_enabled Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 07/10] sched/psi: remove NR_ONCPU task accounting Chengming Zhou
2022-08-16 10:40   ` Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 08/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 09/10] sched/psi: per-cgroup PSI stats disable/re-enable interface Chengming Zhou
     [not found]   ` <20220808110341.15799-10-zhouchengming-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>
2022-08-09 17:48     ` Tejun Heo
2022-08-10  0:39       ` Chengming Zhou
     [not found]         ` <fcd0bd39-3049-a279-23e6-a6c02b4680a7-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>
2022-08-10  1:30           ` Chengming Zhou
2022-08-10 15:25             ` Johannes Weiner [this message]
2022-08-10 17:27               ` Tejun Heo
2022-08-11  2:09                 ` Chengming Zhou
     [not found]               ` <YvPN07UlaPFAdlet-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
2022-08-15 13:23                 ` Michal Koutný
     [not found]                   ` <20220815132343.GA22640-9OudH3eul5jcvrawFnH+a6VXKuFTiq87@public.gmane.org>
2022-08-23  6:18                     ` Chengming Zhou
2022-08-23 15:35                       ` Johannes Weiner
2022-08-23 15:43                         ` Chengming Zhou
     [not found]                         ` <YwTz32VWuZeLHOHe-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
2022-08-23 16:20                           ` Tejun Heo
2022-08-12 10:14     ` Michal Koutný
2022-08-12 12:36       ` Chengming Zhou
2022-08-15 13:23         ` Michal Koutný
2022-08-15 15:49   ` Johannes Weiner
     [not found]     ` <YvprI6ZL8dVWGyBO-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
2022-08-15 19:50       ` Tejun Heo
2022-08-16 13:06       ` Chengming Zhou
2022-08-08 11:03 ` [PATCH v2 10/10] sched/psi: cache parent psi_group to speed up groups iterate Chengming Zhou
     [not found] ` <20220808110341.15799-1-zhouchengming-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>
2022-08-08 11:03   ` [PATCH v2 02/10] sched/psi: optimize task switch inside shared cgroups again Chengming Zhou
2022-08-15 13:25   ` [PATCH v2 00/10] sched/psi: some optimization and extension Michal Koutný
2022-08-16 14:01     ` Chengming Zhou
2022-08-17 15:19       ` Chengming Zhou
