public inbox for linux-kernel@vger.kernel.org
From: Vishal Chourasia <vishalc@linux.ibm.com>
To: Zhang Qiao <zhangqiao22@huawei.com>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
	peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	vschneid@redhat.com, sshegde@linux.ibm.com, srikar@linux.ibm.com,
	vineethr@linux.ibm.com
Subject: Re: [PATCH v2] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
Date: Tue, 10 Dec 2024 14:58:12 +0530	[thread overview]
Message-ID: <Z1gJrJ6TyotWzoCu@linux.ibm.com> (raw)
In-Reply-To: <fb488379-3965-496b-8c6f-259981f3d7e5@huawei.com>

On Tue, Dec 10, 2024 at 02:55:36PM +0800, Zhang Qiao wrote:
> Hi Vishal,
> 
Thanks for looking into this!
> 
> 
> On 2024/12/7 13:27, Vishal Chourasia wrote:
> > CPU controller limits are not properly enforced during CPU hotplug
> > operations, particularly during CPU offline. When a CPU goes offline,
> > throttled processes are unintentionally being unthrottled across all CPUs
> > in the system, allowing them to exceed their assigned quota limits.
> > 
> 
> I encountered a similar issue where the cfs_rq is not in a throttled state and its
> runtime_remaining still has plenty left, yet it gets reset to 1 here, causing the cfs_rq's
> runtime_remaining to be quickly depleted, so the actual running time ends up smaller than the
> configured quota limit.
> 
Correct.
> > Consider the example below:
> > 
> > Assign a 6.25% bandwidth limit to a cgroup in an 8-CPU system, and run a
> > workload of 8 threads for 20 seconds at 100% CPU utilization; the expected
> > (user+sys) time is 10 seconds.
> > 
> > $ cat /sys/fs/cgroup/test/cpu.max
> > 50000 100000
> > 
> > $ ./ebizzy -t 8 -S 20        // non-hotplug case
> > real 20.00 s
> > user 10.81 s                 // intended behaviour
> > sys   0.00 s
> > 
> > $ ./ebizzy -t 8 -S 20        // hotplug case
> > real 20.00 s
> > user 14.43 s                 // Workload is able to run for 14 secs
> > sys   0.00 s                 // when it should have only run for 10 secs
> > 
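(For reference, the 10 s expectation is just the quota arithmetic: a cpu.max of
50000/100000 grants the group 0.5 CPU of bandwidth, i.e. 6.25% of the 8 CPUs, so over a
20 s wall-clock run the allowed CPU time is 0.5 * 20 s = 10 s.)
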
> > During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> > is called for every active CPU to update the root domain. That ends up
> > calling rq_offline_fair which un-throttles any throttled hierarchies.
> > 
> > Unthrottling should only occur for the CPU being hotplugged to allow its
> > throttled processes to become runnable and get migrated to other CPUs.
> > 
> > With current patch applied,
> > $ ./ebizzy -t 8 -S 20        // hotplug case
> > real 21.00 s
> > user 10.16 s                 // intended behaviour
> > sys   0.00 s
> > 
> > Note: the hotplug operation (online, offline) was performed in a while(1) loop
> > Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> > Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
> > 
> > v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
> > 
> > ---
> >  kernel/sched/fair.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fbdca89c677f..e28a8e056ebf 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> >  	list_for_each_entry_rcu(tg, &task_groups, list) {
> >  		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> >  
> > -		if (!cfs_rq->runtime_enabled)
> > +		/* Only unthrottle the CPU being hotplugged */
> > +		if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> >  			continue;
> 
> cpu_of(rq) is a fixed value, so the result of cpumask_test_cpu() is also fixed. We could
> check this before traversing the task_groups list, avoiding unnecessary traversal, right?
Yes, I will send out another version. Thanks for pointing it out!
> 
> Something like this:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2d16c8545c71..79e9e5323112 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6687,25 +6687,29 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>         rq_clock_start_loop_update(rq);
> 
>         rcu_read_lock();
> -       list_for_each_entry_rcu(tg, &task_groups, list) {
> -               struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +       if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
> +               list_for_each_entry_rcu(tg, &task_groups, list) {
> +                       struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> 
> -               if (!cfs_rq->runtime_enabled)
> -                       continue;
> +                       if (!cfs_rq->runtime_enabled)
> +                               continue;
> 
> -               /*
> -                * clock_task is not advancing so we just need to make sure
> -                * there's some valid quota amount
> -                */
> -               cfs_rq->runtime_remaining = 1;
> -               /*
> -                * Offline rq is schedulable till CPU is completely disabled
> -                * in take_cpu_down(), so we prevent new cfs throttling here.
> -                */
> -               cfs_rq->runtime_enabled = 0;
> +                       /*
> +                        * Offline rq is schedulable till CPU is completely disabled
> +                        * in take_cpu_down(), so we prevent new cfs throttling here.
> +                        */
> +                       cfs_rq->runtime_enabled = 0;
> 
> -               if (cfs_rq_throttled(cfs_rq))
> +                       if (!cfs_rq_throttled(cfs_rq))
> +                               continue;
> +
> +                       /*
> +                        * clock_task is not advancing so we just need to make sure
> +                        * there's some valid quota amount
> +                        */
> +                       cfs_rq->runtime_remaining = 1;
>                         unthrottle_cfs_rq(cfs_rq);
> +               }
>         }
Only traverse the task_groups list for inactive CPUs, and if the cfs_rq is throttled,
set its runtime_remaining to 1 and unthrottle it.
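
For reference, an untested sketch of that restructuring, with the check hoisted in front of
the loop and everything else assumed unchanged from the current unthrottle_offline_cfs_rqs():

static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct task_group *tg;

	lockdep_assert_rq_held(rq);

	/*
	 * Only the rq of the CPU going offline needs its cfs_rqs
	 * unthrottled; for every other (still active) CPU, bail out
	 * before touching the rq clock or taking the RCU read lock.
	 */
	if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
		return;

	/*
	 * The rq clock has already been updated in set_rq_offline(),
	 * so skip updating it again in unthrottle_cfs_rq().
	 */
	rq_clock_start_loop_update(rq);

	rcu_read_lock();
	list_for_each_entry_rcu(tg, &task_groups, list) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

		if (!cfs_rq->runtime_enabled)
			continue;

		/*
		 * Offline rq is schedulable till CPU is completely disabled
		 * in take_cpu_down(), so we prevent new cfs throttling here.
		 */
		cfs_rq->runtime_enabled = 0;

		if (!cfs_rq_throttled(cfs_rq))
			continue;

		/*
		 * clock_task is not advancing so we just need to make sure
		 * there's some valid quota amount
		 */
		cfs_rq->runtime_remaining = 1;
		unthrottle_cfs_rq(cfs_rq);
	}
	rcu_read_unlock();

	rq_clock_stop_loop_update(rq);
}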

- vishalc
> 
> -- 
> Zhang Qiao
> >  
> >  		/*

Thread overview: 3+ messages
2024-12-07  5:27 [PATCH v2] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug Vishal Chourasia
2024-12-10  6:55 ` Zhang Qiao
2024-12-10  9:28   ` Vishal Chourasia [this message]
