* [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
@ 2024-12-10 10:23 Vishal Chourasia
2024-12-10 11:31 ` Zhang Qiao
2024-12-10 14:43 ` Peter Zijlstra
0 siblings, 2 replies; 4+ messages in thread
From: Vishal Chourasia @ 2024-12-10 10:23 UTC (permalink / raw)
To: linux-kernel
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, mgorman, vschneid, sshegde, srikar, vineethr,
zhangqiao22, Vishal Chourasia
CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally being unthrottled across all CPUs
in the system, allowing them to exceed their assigned quota limits.
Consider the following example:

A cgroup is assigned a 6.25% bandwidth limit on an 8-CPU system. The
workload runs 8 threads for 20 seconds at 100% CPU utilization, so the
expected (user+sys) time is 10 seconds.
$ cat /sys/fs/cgroup/test/cpu.max
50000 100000
$ ./ebizzy -t 8 -S 20 // non-hotplug case
real 20.00 s
user 10.81 s // intended behaviour
sys 0.00 s
$ ./ebizzy -t 8 -S 20 // hotplug case
real 20.00 s
user 14.43 s // Workload is able to run for 14 secs
sys 0.00 s // when it should have only run for 10 secs
During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain()
is called for every active CPU to update the root domain. That ends up
calling rq_offline_fair(), which unthrottles any throttled hierarchies.

Unthrottling should only occur for the CPU being hotplugged, to allow its
throttled processes to become runnable and get migrated to other CPUs.
With the current patch applied,
$ ./ebizzy -t 8 -S 20 // hotplug case
real 21.00 s
user 10.16 s // intended behaviour
sys 0.00 s
Note: the hotplug operations (online, offline) were performed in a while(1) loop
Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
v2: https://lore.kernel.org/all/20241207052730.1746380-2-vishalc@linux.ibm.com
v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
---
kernel/sched/fair.c | 35 ++++++++++++++++++++---------------
1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa0238ee4857..2faf7dff2bc8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6687,25 +6687,30 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
rq_clock_start_loop_update(rq);
rcu_read_lock();
- list_for_each_entry_rcu(tg, &task_groups, list) {
- struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+ /* Traverse the thread group list only for inactive rq */
+ if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
+ list_for_each_entry_rcu(tg, &task_groups, list) {
+ struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
- if (!cfs_rq->runtime_enabled)
- continue;
+ if (!cfs_rq->runtime_enabled)
+ continue;
- /*
- * clock_task is not advancing so we just need to make sure
- * there's some valid quota amount
- */
- cfs_rq->runtime_remaining = 1;
- /*
- * Offline rq is schedulable till CPU is completely disabled
- * in take_cpu_down(), so we prevent new cfs throttling here.
- */
- cfs_rq->runtime_enabled = 0;
+ /*
+ * Offline rq is schedulable till CPU is completely disabled
+ * in take_cpu_down(), so we prevent new cfs throttling here.
+ */
+ cfs_rq->runtime_enabled = 0;
- if (cfs_rq_throttled(cfs_rq))
+ if (!cfs_rq_throttled(cfs_rq))
+ continue;
+
+ /*
+ * clock_task is not advancing so we just need to make sure
+ * there's some valid quota amount
+ */
+ cfs_rq->runtime_remaining = 1;
unthrottle_cfs_rq(cfs_rq);
+ }
}
rcu_read_unlock();
--
2.47.0
* Re: [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
2024-12-10 10:23 [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug Vishal Chourasia
@ 2024-12-10 11:31 ` Zhang Qiao
2024-12-10 14:43 ` Peter Zijlstra
1 sibling, 0 replies; 4+ messages in thread
From: Zhang Qiao @ 2024-12-10 11:31 UTC (permalink / raw)
To: Vishal Chourasia, linux-kernel
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, mgorman, vschneid, sshegde, srikar, vineethr
On 2024/12/10 18:23, Vishal Chourasia wrote:
> CPU controller limits are not properly enforced during CPU hotplug
> operations, particularly during CPU offline. When a CPU goes offline,
> throttled processes are unintentionally being unthrottled across all CPUs
> in the system, allowing them to exceed their assigned quota limits.
>
> Consider the following example:
>
> A cgroup is assigned a 6.25% bandwidth limit on an 8-CPU system. The
> workload runs 8 threads for 20 seconds at 100% CPU utilization, so the
> expected (user+sys) time is 10 seconds.
>
> $ cat /sys/fs/cgroup/test/cpu.max
> 50000 100000
>
> $ ./ebizzy -t 8 -S 20 // non-hotplug case
> real 20.00 s
> user 10.81 s // intended behaviour
> sys 0.00 s
>
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 20.00 s
> user 14.43 s // Workload is able to run for 14 secs
> sys 0.00 s // when it should have only run for 10 secs
>
> During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> is called for every active CPU to update the root domain. That ends up
> calling rq_offline_fair which un-throttles any throttled hierarchies.
>
> Unthrottling should only occur for the CPU being hotplugged to allow its
> throttled processes to become runnable and get migrated to other CPUs.
>
> With current patch applied,
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 21.00 s
> user 10.16 s // intended behaviour
> sys 0.00 s
>
Could you add a description of another issue[1] here?

This bug also has another symptom: when a CPU goes offline while the
cfs_rq is not in a throttled state and its runtime_remaining still has
plenty left, runtime_remaining is nevertheless reset to 1 here. That
causes the cfs_rq's runtime_remaining to be quickly depleted, so the
actual running time slice is smaller than the configured quota limit.

[1] https://lore.kernel.org/all/fb488379-3965-496b-8c6f-259981f3d7e5@huawei.com/
> Note: hotplug operation (online, offline) was performed in while(1) loop
>
> Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Suggested-by: Zhang Qiao <zhangqiao22@huawei.com>
--
thanks,
Zhang Qiao.
>
> v2: https://lore.kernel.org/all/20241207052730.1746380-2-vishalc@linux.ibm.com
> v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
> ---
> kernel/sched/fair.c | 35 ++++++++++++++++++++---------------
> 1 file changed, 20 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index aa0238ee4857..2faf7dff2bc8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6687,25 +6687,30 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> rq_clock_start_loop_update(rq);
>
> rcu_read_lock();
> - list_for_each_entry_rcu(tg, &task_groups, list) {
> - struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> + /* Traverse the thread group list only for inactive rq */
> + if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
> + list_for_each_entry_rcu(tg, &task_groups, list) {
> + struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>
> - if (!cfs_rq->runtime_enabled)
> - continue;
> + if (!cfs_rq->runtime_enabled)
> + continue;
>
> - /*
> - * clock_task is not advancing so we just need to make sure
> - * there's some valid quota amount
> - */
> - cfs_rq->runtime_remaining = 1;
> - /*
> - * Offline rq is schedulable till CPU is completely disabled
> - * in take_cpu_down(), so we prevent new cfs throttling here.
> - */
> - cfs_rq->runtime_enabled = 0;
> + /*
> + * Offline rq is schedulable till CPU is completely disabled
> + * in take_cpu_down(), so we prevent new cfs throttling here.
> + */
> + cfs_rq->runtime_enabled = 0;
>
> - if (cfs_rq_throttled(cfs_rq))
> + if (!cfs_rq_throttled(cfs_rq))
> + continue;
> +
> + /*
> + * clock_task is not advancing so we just need to make sure
> + * there's some valid quota amount
> + */
> + cfs_rq->runtime_remaining = 1;
> unthrottle_cfs_rq(cfs_rq);
> + }
> }
> rcu_read_unlock();
>
* Re: [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
2024-12-10 10:23 [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug Vishal Chourasia
2024-12-10 11:31 ` Zhang Qiao
@ 2024-12-10 14:43 ` Peter Zijlstra
2024-12-10 15:38 ` Vishal Chourasia
1 sibling, 1 reply; 4+ messages in thread
From: Peter Zijlstra @ 2024-12-10 14:43 UTC (permalink / raw)
To: Vishal Chourasia
Cc: linux-kernel, mingo, juri.lelli, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, sshegde,
srikar, vineethr, zhangqiao22
On Tue, Dec 10, 2024 at 03:53:47PM +0530, Vishal Chourasia wrote:
> CPU controller limits are not properly enforced during CPU hotplug
> operations, particularly during CPU offline. When a CPU goes offline,
> throttled processes are unintentionally being unthrottled across all CPUs
> in the system, allowing them to exceed their assigned quota limits.
>
> Consider the following example:
>
> A cgroup is assigned a 6.25% bandwidth limit on an 8-CPU system. The
> workload runs 8 threads for 20 seconds at 100% CPU utilization, so the
> expected (user+sys) time is 10 seconds.
>
> $ cat /sys/fs/cgroup/test/cpu.max
> 50000 100000
>
> $ ./ebizzy -t 8 -S 20 // non-hotplug case
> real 20.00 s
> user 10.81 s // intended behaviour
> sys 0.00 s
>
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 20.00 s
> user 14.43 s // Workload is able to run for 14 secs
> sys 0.00 s // when it should have only run for 10 secs
>
> During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> is called for every active CPU to update the root domain. That ends up
> calling rq_offline_fair which un-throttles any throttled hierarchies.
>
> Unthrottling should only occur for the CPU being hotplugged to allow its
> throttled processes to become runnable and get migrated to other CPUs.
>
> With current patch applied,
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 21.00 s
> user 10.16 s // intended behaviour
> sys 0.00 s
>
> Note: hotplug operation (online, offline) was performed in while(1) loop
>
> Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Did you mean this?
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2c4ebfc82917..b6afb8337e73 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6696,6 +6696,9 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
lockdep_assert_rq_held(rq);
+ if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
+ return;
+
/*
* The rq clock has already been updated in the
* set_rq_offline(), so we should skip updating
* Re: [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
2024-12-10 14:43 ` Peter Zijlstra
@ 2024-12-10 15:38 ` Vishal Chourasia
0 siblings, 0 replies; 4+ messages in thread
From: Vishal Chourasia @ 2024-12-10 15:38 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, mingo, juri.lelli, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, sshegde,
srikar, vineethr, zhangqiao22
On Tue, Dec 10, 2024 at 03:43:07PM +0100, Peter Zijlstra wrote:
> On Tue, Dec 10, 2024 at 03:53:47PM +0530, Vishal Chourasia wrote:
> > CPU controller limits are not properly enforced during CPU hotplug
> > operations, particularly during CPU offline. When a CPU goes offline,
> > throttled processes are unintentionally being unthrottled across all CPUs
> > in the system, allowing them to exceed their assigned quota limits.
> >
> > Consider the following example:
> >
> > A cgroup is assigned a 6.25% bandwidth limit on an 8-CPU system. The
> > workload runs 8 threads for 20 seconds at 100% CPU utilization, so the
> > expected (user+sys) time is 10 seconds.
> >
> > $ cat /sys/fs/cgroup/test/cpu.max
> > 50000 100000
> >
> > $ ./ebizzy -t 8 -S 20 // non-hotplug case
> > real 20.00 s
> > user 10.81 s // intended behaviour
> > sys 0.00 s
> >
> > $ ./ebizzy -t 8 -S 20 // hotplug case
> > real 20.00 s
> > user 14.43 s // Workload is able to run for 14 secs
> > sys 0.00 s // when it should have only run for 10 secs
> >
> > During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> > is called for every active CPU to update the root domain. That ends up
> > calling rq_offline_fair which un-throttles any throttled hierarchies.
> >
> > Unthrottling should only occur for the CPU being hotplugged to allow its
> > throttled processes to become runnable and get migrated to other CPUs.
> >
> > With current patch applied,
> > $ ./ebizzy -t 8 -S 20 // hotplug case
> > real 21.00 s
> > user 10.16 s // intended behaviour
> > sys 0.00 s
> >
> > Note: hotplug operation (online, offline) was performed in while(1) loop
> >
> > Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
> > Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
>
> Did you mean this?
Yes, essentially this.
I will post another version.
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2c4ebfc82917..b6afb8337e73 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6696,6 +6696,9 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>
> lockdep_assert_rq_held(rq);
>
> + if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> + return;
> +
> /*
> * The rq clock has already been updated in the
> * set_rq_offline(), so we should skip updating
What should be done for the case where the hotplugged CPU's cfs_rq has
plenty of runtime_remaining?

I see three choices:

1) set it to 1 (no change required in the current code)
2) skip the reset, leaving runtime_remaining untouched (similar to the current patch)
3) return the excess runtime to the global pool (will require taking a lock)
Thanks
- vishalc