From: Valentin Schneider <valentin.schneider@arm.com>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	lizefan@huawei.com, tj@kernel.org, hannes@cmpxchg.org,
	mingo@kernel.org, peterz@infradead.org,
	vincent.guittot@linaro.org, Dietmar.Eggemann@arm.com,
	morten.rasmussen@arm.com, qperret@google.com
Subject: Re: [PATCH v2] sched/topology, cpuset: Account for housekeeping CPUs to avoid empty cpumasks
Date: Fri, 15 Nov 2019 18:48:43 +0000
Message-ID: <c425c5cb-ba8a-e5f6-d91c-5479779cfb7a@arm.com>
In-Reply-To: <20191115171807.GH19372@blackbody.suse.cz>

On 15/11/2019 17:18, Michal Koutný wrote:
> Hello.
> 
> On Thu, Nov 14, 2019 at 04:03:50PM +0000, Valentin Schneider <valentin.schneider@arm.com> wrote:
>> Michal, could I nag you for a reviewed-by? I'd feel a bit more confident
>> with any sort of approval from folks who actually do use cpusets.
> TL;DR I played with v5.4-rc6 _without_ this fixup and I conclude it is
> unnecessary (IOW my previous theoretical observation was wrong).
> 

Thanks for going through the trouble of testing the thing.

> 
> The original problem is a non-issue with the v2 cpuset controller,
> because effective_cpus is never empty. isolcpus doesn't take out cpuset
> CPUs, hotplug does. In the case where no online CPU remains in the
> cpuset, it inherits the ancestor's non-empty effective_cpus.
> 
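
(For reference, that's this bit of update_cpumasks_hier() in
kernel/cgroup/cpuset.c; paraphrased from v5.4, so read it as a sketch
rather than a verbatim quote:

	compute_effective_cpumask(tmp->new_cpus, cp, parent);

	/*
	 * If it becomes empty, inherit the effective mask of the
	 * parent, which is guaranteed to have some CPUs.
	 */
	if (is_in_v2_mode() && cpumask_empty(tmp->new_cpus))
		cpumask_copy(tmp->new_cpus, parent->effective_cpus);
)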

But we still take out the isolcpus from the domain span before handing it
over to the scheduler:

	cpumask_or(dp, dp, b->effective_cpus);
	cpumask_and(dp, dp, housekeeping_cpumask(HK_FLAG_DOMAIN));
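
(Concretely, with isolcpus=6-7 and a cpuset whose effective_cpus ends up
as 6-7, that pair of operations goes:

	cpumask_or(dp, dp, b->effective_cpus);   /* dp = 6-7               */
	cpumask_and(dp, dp, housekeeping_cpumask(HK_FLAG_DOMAIN));
	                                         /* dp = 6-7 & 0-5 = empty */

which is exactly the empty domain span the patch was trying to avoid.)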

But...

> I reproduced the problem with v1 (before your fix). However, in v1
> effective == allowed (we're destructive and overwrite allowed on
> hotunplug), and we already check the emptiness of
> 
>   cpumask_intersects(cp->cpus_allowed, housekeeping_cpumask(HK_FLAG_DOMAIN))
> 
> a few lines higher. I.e. the fixup adds a redundant check against empty
> sched domain production.
> 

...You're right, I've been misreading that as a '!is_sched_load_balance()'
condition ever since. Duh. So this condition will always catch cpusets that
only span outside the housekeeping domain, and my previous fixup will
catch newly-empty cpusets (due to hotplug). Perhaps it would've been cleaner
to merge the two, but as things stand this patch isn't needed (as you say).
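
For the record, the check in question in generate_sched_domains(), again
paraphrased from v5.4 rather than quoted verbatim:

	if (!cpumask_empty(cp->cpus_allowed) &&
	    !(is_sched_load_balance(cp) &&
	      cpumask_intersects(cp->cpus_allowed,
				 housekeeping_cpumask(HK_FLAG_DOMAIN))))
		continue;

i.e. a load-balancing cpuset that doesn't intersect the housekeeping mask
never makes it into a domain span in the first place.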


I tried this out to really be sure (8 CPU SMP aarch64 qemu target):

  cd /sys/fs/cgroup/cpuset

  mkdir cs1
  echo 1 > cs1/cpuset.cpu_exclusive
  echo 0 > cs1/cpuset.mems
  echo 0-4 > cs1/cpuset.cpus

  mkdir cs2
  echo 1 > cs2/cpuset.cpu_exclusive
  echo 0 > cs2/cpuset.mems
  echo 5-7 > cs2/cpuset.cpus

  echo 0 > cpuset.sched_load_balance

booted with
  
  isolcpus=6-7

It seems that creating a cpuset with CPUs only outside the housekeeping
domain is forbidden, so I'm creating cs2 with *one* CPU in the domain. When
I hotplug it out, nothing dies horribly:

  echo 0 > /sys/devices/system/cpu/cpu5/online
  [   24.688145] CPU5: shutdown
  [   24.689438] psci: CPU5 killed.
  [   24.714168] allowed=0-4 effective=0-4 housekeeping=0-5
  [   24.714642] allowed=6-7 effective=6-7 housekeeping=0-5
  [   24.715416] CPU5 attaching NULL sched-domain.
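
(Those allowed=/effective=/housekeeping= lines are from a quick debug
print, roughly along these lines; the exact form and placement here are
a guess:

	/* sketch; not the exact print used for the log above */
	pr_info("allowed=%*pbl effective=%*pbl housekeeping=%*pbl\n",
		cpumask_pr_args(cp->cpus_allowed),
		cpumask_pr_args(cp->effective_cpus),
		cpumask_pr_args(housekeeping_cpumask(HK_FLAG_DOMAIN)));
)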

> Sorry for the noise and HTH,

Sure does, thanks!

> Michal
> 
