public inbox for linux-kernel@vger.kernel.org
* [PATCH v2 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level
@ 2024-03-31 16:01 Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 1/3] " Vitalii Bursov
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Vitalii Bursov @ 2024-03-31 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel,
	Vitalii Bursov

Changes in v2:
- Split the debug.c change into a separate commit and moved the new
  "level" file after "groups_flags"
- Added a "Fixes" tag and updated the commit message
- Updated the domain levels table in the cgroup-v1/cpusets.rst
  documentation
- Link to v1: https://lore.kernel.org/all/cover.1711584739.git.vitaly@bursov.com/

During the upgrade from Linux 5.4 we found a small (around 3%)
performance regression, which was tracked down to commit
c5b0a7eefc70150caf23e37bc9d639c68c87a097:

    sched/fair: Remove sysctl_sched_migration_cost condition

    With a default value of 500us, sysctl_sched_migration_cost is
    significantly higher than the cost of load_balance. Remove the
    condition and rely on the sd->max_newidle_lb_cost to abort
    newidle_balance.

Looks like "newidle" balancing is beneficial for a lot of workloads, 
just not for this specific one. The workload is video encoding, there 
are 100s-1000s of threads, some are synchronized with mutexes and 
conditional variables. The process aims to have a portion of CPU idle, 
so no CPU cores are 100% busy. Perhaps, the performance impact we see 
comes from additional processing in the scheduler and additional cost 
like more cache misses, and not from an incorrect balancing. See
perf output below.

My understanding is that the "sched_relax_domain_level" cgroup parameter
should control whether sched_balance_newidle() is called and what the
scope of the balancing is, but it doesn't fully work for this case.

cpusets.rst documentation:
> The 'cpuset.sched_relax_domain_level' file allows you to request changing
> this searching range as you like.  This file takes int value which
> indicates size of searching range in levels ideally as follows,
> otherwise initial value -1 that indicates the cpuset has no request.
>  
> ====== ===========================================================
>   -1   no request. use system default or follow request of others.
>    0   no search.
>    1   search siblings (hyperthreads in a core).
>    2   search cores in a package.
>    3   search cpus in a node [= system wide on non-NUMA system]
>    4   search nodes in a chunk of node [on NUMA system]
>    5   search system wide [on NUMA system]
> ====== ===========================================================

Setting cpuset.sched_relax_domain_level to 0 behaves the same as
setting it to 1.

On a dual-CPU server, domains and levels are as follows:
  domain 0: level 0, SMT
  domain 1: level 2, MC
  domain 2: level 5, NUMA

So, to support "0 no search", the value written to
cpuset.sched_relax_domain_level should disable SD_BALANCE_NEWIDLE for
the specified level and above, and keep it enabled for lower levels.
For example, the SMT level is 0, so sched_relax_domain_level=0 should
exclude levels >= 0.

Instead, cpuset.sched_relax_domain_level enables the specified level,
which effectively removes the "no search" option. See below for the
domain flags for all cpuset.sched_relax_domain_level values.
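
For reference, a rough sketch of the current check in
set_domain_attribute() in kernel/sched/topology.c. The level comparison
and the flag clearing are quoted from patch 1/3 below; the surrounding
default-level handling is reconstructed from memory and may differ
slightly from the exact source:

    static void set_domain_attribute(struct sched_domain *sd,
                                     struct sched_domain_attr *attr)
    {
            int request;

            if (!attr || attr->relax_domain_level < 0) {
                    if (default_relax_domain_level < 0)
                            return;
                    request = default_relax_domain_level;
            } else
                    request = attr->relax_domain_level;

            /*
             * Strictly greater-than: the requested level itself keeps
             * its flags, so a request of 0 still leaves
             * SD_BALANCE_NEWIDLE set on level 0 (SMT) and acts exactly
             * like a request of 1.
             */
            if (sd->level > request) {
                    /* Turn off idle balance on this domain: */
                    sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
            }
    }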

The proposed patch allows clearing the SD_BALANCE_NEWIDLE flag when
cpuset.sched_relax_domain_level is set to 0 and extends the maximum
accepted value beyond sched_domain_level_max. This makes it possible to
set SD_BALANCE_NEWIDLE on all levels and to override the platform
default if it does not include all levels.
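
With the patch, the comparison becomes inclusive and the cpuset-side
validation accepts one more value. A condensed view of the two changed
checks from patch 1/3 below, with comments added here for explanation:

    /*
     * kernel/sched/topology.c, set_domain_attribute(): also clear the
     * flags on the requested level itself, so a request of 0 disables
     * newidle balancing on every domain.
     */
    if (sd->level >= request) {
            /* Turn off idle balance on this domain: */
            sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
    }

    /*
     * kernel/cgroup/cpuset.c, update_relax_domain_level(): accept
     * sched_domain_level_max + 1 so that SD_BALANCE_NEWIDLE can be
     * kept on all levels, including the last one.
     */
    if (val < -1 || val > sched_domain_level_max + 1)
            return -EINVAL;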

Thanks

=========================
Perf output for a similar workload/test case shows that newidle_balance
(now renamed to sched_balance_newidle) is called when handling futex and
nanosleep syscalls:
8.74%     0.40%  a.out    [kernel.vmlinux]    [k] entry_SYSCALL_64
8.34% entry_SYSCALL_64
 - do_syscall_64
    - 5.50% __x64_sys_futex
       - 5.42% do_futex
          - 3.79% futex_wait
             - 3.74% __futex_wait
                - 3.53% futex_wait_queue
                   - 3.45% schedule
                      - 3.43% __schedule
                         - 2.06% pick_next_task
                            - 1.93% pick_next_task_fair
                               - 1.87% newidle_balance
                                  - 1.52% load_balance
                                     - 1.16% find_busiest_group
                                        - 1.13% update_sd_lb_stats.constprop.0
                                             1.01% update_sg_lb_stats
                         - 0.83% dequeue_task_fair
                              0.66% dequeue_entity
          - 1.57% futex_wake
             - 1.22% wake_up_q
                - 1.20% try_to_wake_up
                     0.58% select_task_rq_fair
    - 2.44% __x64_sys_nanosleep
       - 2.36% hrtimer_nanosleep
          - 2.33% do_nanosleep
             - 2.05% schedule
                - 2.03% __schedule
                   - 1.23% pick_next_task
                      - 1.15% pick_next_task_fair
                         - 1.12% newidle_balance
                            - 0.90% load_balance
                               - 0.68% find_busiest_group
                                  - 0.66% update_sd_lb_stats.constprop.0
                                       0.59% update_sg_lb_stats
                     0.52% dequeue_task_fair

When newidle_balance is disabled (or when using older kernels), perf
output is:
6.37%     0.41%  a.out    [kernel.vmlinux]    [k] entry_SYSCALL_64
5.96% entry_SYSCALL_64
 - do_syscall_64
    - 3.97% __x64_sys_futex
       - 3.89% do_futex
          - 2.32% futex_wait
             - 2.27% __futex_wait
                - 2.05% futex_wait_queue
                   - 1.98% schedule
                      - 1.96% __schedule
                         - 0.81% dequeue_task_fair
                              0.66% dequeue_entity
                         - 0.64% pick_next_task
                              0.51% pick_next_task_fair
          - 1.52% futex_wake
             - 1.15% wake_up_q
                - try_to_wake_up
                     0.59% select_task_rq_fair
    - 1.58% __x64_sys_nanosleep
       - 1.52% hrtimer_nanosleep
          - 1.48% do_nanosleep
             - 1.20% schedule
                - 1.19% __schedule
                     0.53% dequeue_task_fair


Without the patch:
=========================
CPUs: 2 Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz

# uname -r
6.8.1

# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 0 size: 63962 MB
node 0 free: 59961 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 64446 MB
node 1 free: 63338 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 

# head /proc/schedstat 
version 15
timestamp 4295347219
cpu0 0 0 0 0 0 0 3035466036 858375615 67578
domain0 0000,01000001 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0...
domain1 000f,ff000fff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0...
domain2 ffff,ffffffff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0...

# cd /sys/kernel/debug/sched/domains
# echo -1 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{name,flags,groups_flags,max_newidle_lb_cost}
cpu0/domain0/name:SMT
cpu0/domain1/name:MC
cpu0/domain2/name:NUMA

cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SERIALIZE SD_OVERLAP 
                            SD_NUMA
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:2236
cpu0/domain1/max_newidle_lb_cost:3444
cpu0/domain2/max_newidle_lb_cost:4590

# echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags,max_newidle_lb_cost}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain1/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SERIALIZE SD_OVERLAP SD_NUMA 
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:0
cpu0/domain1/max_newidle_lb_cost:0
cpu0/domain2/max_newidle_lb_cost:0

# echo 1 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags,max_newidle_lb_cost}

cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain1/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SERIALIZE SD_OVERLAP SD_NUMA 
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:309
cpu0/domain1/max_newidle_lb_cost:0
cpu0/domain2/max_newidle_lb_cost:0

# echo 2 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags,max_newidle_lb_cost}

cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SERIALIZE SD_OVERLAP SD_NUMA 
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:276
cpu0/domain1/max_newidle_lb_cost:2776
cpu0/domain2/max_newidle_lb_cost:0

# echo 3 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags,max_newidle_lb_cost}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SERIALIZE SD_OVERLAP SD_NUMA 
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:289
cpu0/domain1/max_newidle_lb_cost:3192
cpu0/domain2/max_newidle_lb_cost:0

# echo 4 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags,max_newidle_lb_cost}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_CPUCAPACITY 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK 
                            SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SERIALIZE SD_OVERLAP SD_NUMA 
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_CPUCAPACITY SD_SHARE_PKG_RESOURCES 
                            SD_PREFER_SIBLING 
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC 
                            SD_BALANCE_FORK SD_WAKE_AFFINE 
                            SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING 
cpu0/domain0/max_newidle_lb_cost:1306
cpu0/domain1/max_newidle_lb_cost:1999
cpu0/domain2/max_newidle_lb_cost:0

# echo 5 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
bash: echo: write error: Invalid argument
=========================


The same system with the patch applied:
=========================
# cd /sys/kernel/debug/sched/domains
# echo -1 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{name,level,flags,groups_flags}
cpu0/domain0/name:SMT
cpu0/domain1/name:MC
cpu0/domain2/name:NUMA
cpu0/domain0/level:0
cpu0/domain1/level:2
cpu0/domain2/level:5
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...

# echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags}
cpu0/domain0/flags:SD_BALANCE_EXEC ...
cpu0/domain1/flags:SD_BALANCE_EXEC ...
cpu0/domain2/flags:SD_BALANCE_EXEC ...
cpu0/domain1/groups_flags:SD_BALANCE_EXEC ...
cpu0/domain2/groups_flags:SD_BALANCE_EXEC ...

# echo 1 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/flags:SD_BALANCE_EXEC ...
cpu0/domain2/flags:SD_BALANCE_EXEC ...
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/groups_flags:SD_BALANCE_EXEC ...

[skip 2, same as 1]

# echo 3 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/flags:SD_BALANCE_EXEC ...
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...

[skip 4 and 5, same as 3]

# echo 6 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . cpu0/*/{flags,groups_flags}
cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain1/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...
cpu0/domain2/groups_flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC ...

# echo 7 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
bash: echo: write error: Invalid argument
=========================

Vitalii Bursov (3):
  sched/fair: allow disabling sched_balance_newidle with
    sched_relax_domain_level
  sched/debug: dump domains' level
  docs: cgroup-v1: clarify that domain levels are system-specific

 Documentation/admin-guide/cgroup-v1/cpusets.rst | 16 +++++++++++-----
 kernel/cgroup/cpuset.c                          |  2 +-
 kernel/sched/debug.c                            |  1 +
 kernel/sched/topology.c                         |  2 +-
 4 files changed, 14 insertions(+), 7 deletions(-)

-- 
2.20.1



* [PATCH v2 1/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level
  2024-03-31 16:01 [PATCH v2 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level Vitalii Bursov
@ 2024-03-31 16:01 ` Vitalii Bursov
  2024-04-01 10:23   ` Vincent Guittot
  2024-03-31 16:01 ` [PATCH v2 2/3] sched/debug: dump domains' level Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific Vitalii Bursov
  2 siblings, 1 reply; 8+ messages in thread
From: Vitalii Bursov @ 2024-03-31 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel,
	Vitalii Bursov

Change relax_domain_level checks so that it would be possible
to include or exclude all domains from newidle balancing.

This matches the behavior described in the documentation:
  -1   no request. use system default or follow request of others.
   0   no search.
   1   search siblings (hyperthreads in a core).

"2" enables levels 0 and 1, level_max excludes the last (level_max)
level, and level_max+1 includes all levels.

Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")
Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
---
 kernel/cgroup/cpuset.c  | 2 +-
 kernel/sched/topology.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4237c8748..da24187c4 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
 static int update_relax_domain_level(struct cpuset *cs, s64 val)
 {
 #ifdef CONFIG_SMP
-	if (val < -1 || val >= sched_domain_level_max)
+	if (val < -1 || val > sched_domain_level_max + 1)
 		return -EINVAL;
 #endif
 
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 63aecd2a7..67a777b31 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1475,7 +1475,7 @@ static void set_domain_attribute(struct sched_domain *sd,
 	} else
 		request = attr->relax_domain_level;
 
-	if (sd->level > request) {
+	if (sd->level >= request) {
 		/* Turn off idle balance on this domain: */
 		sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
 	}
-- 
2.20.1



* [PATCH v2 2/3] sched/debug: dump domains' level
  2024-03-31 16:01 [PATCH v2 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 1/3] " Vitalii Bursov
@ 2024-03-31 16:01 ` Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific Vitalii Bursov
  2 siblings, 0 replies; 8+ messages in thread
From: Vitalii Bursov @ 2024-03-31 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel,
	Vitalii Bursov

Knowing a domain's exact level can be useful when setting
relax_domain_level or cpuset.sched_relax_domain_level.

Usage:
cat /debug/sched/domains/cpu0/domain1/level
to dump cpu0 domain1's level.

Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
---
 kernel/sched/debug.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 8d5d98a58..c1eb9a1af 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -425,6 +425,7 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
 
 	debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
 	debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
+	debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
 }
 
 void update_sched_domain_debugfs(void)
-- 
2.20.1



* [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific
  2024-03-31 16:01 [PATCH v2 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 1/3] " Vitalii Bursov
  2024-03-31 16:01 ` [PATCH v2 2/3] sched/debug: dump domains' level Vitalii Bursov
@ 2024-03-31 16:01 ` Vitalii Bursov
  2024-04-01  4:05   ` Shrikanth Hegde
  2 siblings, 1 reply; 8+ messages in thread
From: Vitalii Bursov @ 2024-03-31 16:01 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel,
	Vitalii Bursov

Add a clarification that domain levels are system-specific
and where to check for system details.

Add CPU clusters to the scheduler domain levels table.

Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
---
 Documentation/admin-guide/cgroup-v1/cpusets.rst | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/cpusets.rst b/Documentation/admin-guide/cgroup-v1/cpusets.rst
index 7d3415eea..d16a3967d 100644
--- a/Documentation/admin-guide/cgroup-v1/cpusets.rst
+++ b/Documentation/admin-guide/cgroup-v1/cpusets.rst
@@ -568,19 +568,25 @@ on the next tick.  For some applications in special situation, waiting
 
 The 'cpuset.sched_relax_domain_level' file allows you to request changing
 this searching range as you like.  This file takes int value which
-indicates size of searching range in levels ideally as follows,
+indicates size of searching range in levels approximately as follows,
 otherwise initial value -1 that indicates the cpuset has no request.
 
 ====== ===========================================================
   -1   no request. use system default or follow request of others.
    0   no search.
    1   search siblings (hyperthreads in a core).
-   2   search cores in a package.
-   3   search cpus in a node [= system wide on non-NUMA system]
-   4   search nodes in a chunk of node [on NUMA system]
-   5   search system wide [on NUMA system]
+   2   search cpu clusters
+   3   search cores in a package.
+   4   search cpus in a node [= system wide on non-NUMA system]
+   5   search nodes in a chunk of node [on NUMA system]
+   6   search system wide [on NUMA system]
 ====== ===========================================================
 
+Not all levels can be present and values can change depending on the
+system architecture and kernel configuration. Check
+/sys/kernel/debug/sched/domains/cpu*/domain*/ for system-specific
+details.
+
 The system default is architecture dependent.  The system default
 can be changed using the relax_domain_level= boot parameter.
 
-- 
2.20.1



* Re: [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific
  2024-03-31 16:01 ` [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific Vitalii Bursov
@ 2024-04-01  4:05   ` Shrikanth Hegde
  2024-04-01 10:35     ` Vitalii Bursov
  0 siblings, 1 reply; 8+ messages in thread
From: Shrikanth Hegde @ 2024-04-01  4:05 UTC (permalink / raw)
  To: Vitalii Bursov
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel



On 3/31/24 9:31 PM, Vitalii Bursov wrote:
> Add a clarification that domain levels are system-specific
> and where to check for system details.
> 
> Add CPU clusters to the scheduler domain levels table.
> 
> Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
> ---
>  Documentation/admin-guide/cgroup-v1/cpusets.rst | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/admin-guide/cgroup-v1/cpusets.rst b/Documentation/admin-guide/cgroup-v1/cpusets.rst
> index 7d3415eea..d16a3967d 100644
> --- a/Documentation/admin-guide/cgroup-v1/cpusets.rst
> +++ b/Documentation/admin-guide/cgroup-v1/cpusets.rst
> @@ -568,19 +568,25 @@ on the next tick.  For some applications in special situation, waiting
>  
>  The 'cpuset.sched_relax_domain_level' file allows you to request changing
>  this searching range as you like.  This file takes int value which
> -indicates size of searching range in levels ideally as follows,
> +indicates size of searching range in levels approximately as follows,
>  otherwise initial value -1 that indicates the cpuset has no request.
>  
>  ====== ===========================================================
>    -1   no request. use system default or follow request of others.
>     0   no search.
>     1   search siblings (hyperthreads in a core).
> -   2   search cores in a package.
> -   3   search cpus in a node [= system wide on non-NUMA system]
> -   4   search nodes in a chunk of node [on NUMA system]
> -   5   search system wide [on NUMA system]
> +   2   search cpu clusters
> +   3   search cores in a package.
> +   4   search cpus in a node [= system wide on non-NUMA system]
> +   5   search nodes in a chunk of node [on NUMA system]
> +   6   search system wide [on NUMA system]

I think the above block of documentation need not change. SD_CLUSTER is
a software construct, not a sched domain per se.

IMO the next paragraph that is added is good enough and the above change can be removed.

>  ====== ===========================================================
>  
> +Not all levels can be present and values can change depending on the
> +system architecture and kernel configuration. Check
> +/sys/kernel/debug/sched/domains/cpu*/domain*/ for system-specific
> +details.
> +
>  The system default is architecture dependent.  The system default
>  can be changed using the relax_domain_level= boot parameter.
>  


* Re: [PATCH v2 1/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level
  2024-03-31 16:01 ` [PATCH v2 1/3] " Vitalii Bursov
@ 2024-04-01 10:23   ` Vincent Guittot
  0 siblings, 0 replies; 8+ messages in thread
From: Vincent Guittot @ 2024-04-01 10:23 UTC (permalink / raw)
  To: Vitalii Bursov
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel

On Sun, 31 Mar 2024 at 18:02, Vitalii Bursov <vitaly@bursov.com> wrote:
>
> Change relax_domain_level checks so that it would be possible
> to include or exclude all domains from newidle balancing.
>
> This matches the behavior described in the documentation:
>   -1   no request. use system default or follow request of others.
>    0   no search.
>    1   search siblings (hyperthreads in a core).
>
> "2" enables levels 0 and 1, level_max excludes the last (level_max)
> level, and level_max+1 includes all levels.
>
> Fixes: 9ae7ab20b483 ("sched/topology: Don't set SD_BALANCE_WAKE on cpuset domain relax")
> Signed-off-by: Vitalii Bursov <vitaly@bursov.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>  kernel/cgroup/cpuset.c  | 2 +-
>  kernel/sched/topology.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 4237c8748..da24187c4 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2948,7 +2948,7 @@ bool current_cpuset_is_being_rebound(void)
>  static int update_relax_domain_level(struct cpuset *cs, s64 val)
>  {
>  #ifdef CONFIG_SMP
> -       if (val < -1 || val >= sched_domain_level_max)
> +       if (val < -1 || val > sched_domain_level_max + 1)
>                 return -EINVAL;
>  #endif
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 63aecd2a7..67a777b31 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1475,7 +1475,7 @@ static void set_domain_attribute(struct sched_domain *sd,
>         } else
>                 request = attr->relax_domain_level;
>
> -       if (sd->level > request) {
> +       if (sd->level >= request) {
>                 /* Turn off idle balance on this domain: */
>                 sd->flags &= ~(SD_BALANCE_WAKE|SD_BALANCE_NEWIDLE);
>         }
> --
> 2.20.1
>


* Re: [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific
  2024-04-01  4:05   ` Shrikanth Hegde
@ 2024-04-01 10:35     ` Vitalii Bursov
  2024-04-01 13:30       ` Shrikanth Hegde
  0 siblings, 1 reply; 8+ messages in thread
From: Vitalii Bursov @ 2024-04-01 10:35 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel



On 01.04.24 07:05, Shrikanth Hegde wrote:
> 
> 
> On 3/31/24 9:31 PM, Vitalii Bursov wrote:
>> Add a clarification that domain levels are system-specific
>> and where to check for system details.
>>
>> Add CPU clusters to the scheduler domain levels table.
>>
>> Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
>> ---
>>  Documentation/admin-guide/cgroup-v1/cpusets.rst | 16 +++++++++++-----
>>  1 file changed, 11 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/cgroup-v1/cpusets.rst b/Documentation/admin-guide/cgroup-v1/cpusets.rst
>> index 7d3415eea..d16a3967d 100644
>> --- a/Documentation/admin-guide/cgroup-v1/cpusets.rst
>> +++ b/Documentation/admin-guide/cgroup-v1/cpusets.rst
>> @@ -568,19 +568,25 @@ on the next tick.  For some applications in special situation, waiting
>>  
>>  The 'cpuset.sched_relax_domain_level' file allows you to request changing
>>  this searching range as you like.  This file takes int value which
>> -indicates size of searching range in levels ideally as follows,
>> +indicates size of searching range in levels approximately as follows,
>>  otherwise initial value -1 that indicates the cpuset has no request.
>>  
>>  ====== ===========================================================
>>    -1   no request. use system default or follow request of others.
>>     0   no search.
>>     1   search siblings (hyperthreads in a core).
>> -   2   search cores in a package.
>> -   3   search cpus in a node [= system wide on non-NUMA system]
>> -   4   search nodes in a chunk of node [on NUMA system]
>> -   5   search system wide [on NUMA system]
>> +   2   search cpu clusters
>> +   3   search cores in a package.
>> +   4   search cpus in a node [= system wide on non-NUMA system]
>> +   5   search nodes in a chunk of node [on NUMA system]
>> +   6   search system wide [on NUMA system]
> 
> I think above block of documentation need not change. SD_CLUSTER is a software 
> construct, not a sched domain per se. 
> 

I added "cpu clusters" because the original table:
====== ===========================================================
  -1   no request. use system default or follow request of others.
   0   no search.
   1   search siblings (hyperthreads in a core).
   2   search cores in a package.
   3   search cpus in a node [= system wide on non-NUMA system]
   4   search nodes in a chunk of node [on NUMA system]
   5   search system wide [on NUMA system]
====== ===========================================================
does not match what I see on a few systems I checked.

AMD Ryzen and the same dual-CPU Intel server with NUMA disabled:
  level:0 - SMT
  level:2 - MC
  level:3 - PKG

Server with NUMA enabled:
  level:0 - SMT
  level:2 - MC
  level:5 - NUMA

So, with the original table, the relax levels work like this:
  1 -> enables 0 SMT -> OK
  2 -> enables 1 unknown -> does not enable cores in a package
  3 -> enables 2 MC -> OK for NUMA, but not system wide on a non-NUMA system
  5 -> enables 4 unknown -> does not enable system wide on NUMA

The updated table
====== ===========================================================
  -1   no request. use system default or follow request of others.
   0   no search.
   1   search siblings (hyperthreads in a core).
   2   search cpu clusters
   3   search cores in a package.
   4   search cpus in a node [= system wide on non-NUMA system]
   5   search nodes in a chunk of node [on NUMA system]
   6   search system wide [on NUMA system]
====== ===========================================================
would work like this:
  1 -> enables 0 SMT -> OK
  2 -> enables 1 unknown -> does nothing new
  3 -> enables 2 MC -> OK, cores in a package for NUMA and non-NUMA system
  4 -> enables 3 PKG -> OK on non-NUMA system
  6 -> enables 5 NUMA -> OK

I think it would look more correct on "average" systems, but anyway,
please confirm and I'll remove the table update in an updated patch.

Thanks

> IMO the next paragraph that is added is good enough and the above change can be removed.

>>  ====== ===========================================================
>>  
>> +Not all levels can be present and values can change depending on the
>> +system architecture and kernel configuration. Check
>> +/sys/kernel/debug/sched/domains/cpu*/domain*/ for system-specific
>> +details.
>> +
>>  The system default is architecture dependent.  The system default
>>  can be changed using the relax_domain_level= boot parameter.
>>  


* Re: [PATCH v2 3/3] docs: cgroup-v1: clarify that domain levels are system-specific
  2024-04-01 10:35     ` Vitalii Bursov
@ 2024-04-01 13:30       ` Shrikanth Hegde
  0 siblings, 0 replies; 8+ messages in thread
From: Shrikanth Hegde @ 2024-04-01 13:30 UTC (permalink / raw)
  To: Vitalii Bursov
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel



On 4/1/24 4:05 PM, Vitalii Bursov wrote:
> 
> 
> On 01.04.24 07:05, Shrikanth Hegde wrote:
>>
>>
>> On 3/31/24 9:31 PM, Vitalii Bursov wrote:
>>> Add a clarification that domain levels are system-specific
>>> and where to check for system details.
>>>
>>> Add CPU clusters to the scheduler domain levels table.
>>>
>>> Signed-off-by: Vitalii Bursov <vitaly@bursov.com>
>>> ---
>>>  Documentation/admin-guide/cgroup-v1/cpusets.rst | 16 +++++++++++-----
>>>  1 file changed, 11 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/Documentation/admin-guide/cgroup-v1/cpusets.rst b/Documentation/admin-guide/cgroup-v1/cpusets.rst
>>> index 7d3415eea..d16a3967d 100644
>>> --- a/Documentation/admin-guide/cgroup-v1/cpusets.rst
>>> +++ b/Documentation/admin-guide/cgroup-v1/cpusets.rst
>>> @@ -568,19 +568,25 @@ on the next tick.  For some applications in special situation, waiting
>>>  
>>>  The 'cpuset.sched_relax_domain_level' file allows you to request changing
>>>  this searching range as you like.  This file takes int value which
>>> -indicates size of searching range in levels ideally as follows,
>>> +indicates size of searching range in levels approximately as follows,
>>>  otherwise initial value -1 that indicates the cpuset has no request.
>>>  
>>>  ====== ===========================================================
>>>    -1   no request. use system default or follow request of others.
>>>     0   no search.
>>>     1   search siblings (hyperthreads in a core).
>>> -   2   search cores in a package.
>>> -   3   search cpus in a node [= system wide on non-NUMA system]
>>> -   4   search nodes in a chunk of node [on NUMA system]
>>> -   5   search system wide [on NUMA system]
>>> +   2   search cpu clusters
>>> +   3   search cores in a package.
>>> +   4   search cpus in a node [= system wide on non-NUMA system]
>>> +   5   search nodes in a chunk of node [on NUMA system]
>>> +   6   search system wide [on NUMA system]
>>
>> I think above block of documentation need not change. SD_CLUSTER is a software 
>> construct, not a sched domain per se. 
>>
> 
> I added "cpu clusters" because the original table:
> ====== ===========================================================
>   -1   no request. use system default or follow request of others.
>    0   no search.
>    1   search siblings (hyperthreads in a core).
>    2   search cores in a package.
>    3   search cpus in a node [= system wide on non-NUMA system]
>    4   search nodes in a chunk of node [on NUMA system]
>    5   search system wide [on NUMA system]
> ====== ===========================================================
> does not match to what I see on a few systems I checked.
> 
> AMD Ryzen and the same dual-CPU Intel server with NUMA disabled:
>   level:0 - SMT
>   level:2 - MC
>   level:3 - PKG
> 
> Server with NUMA enabled:
>   level:0 - SMT
>   level:2 - MC
>   level:5 - NUMA
> 

None of these are "cpu clusters".

From what I know, the descriptions for the above are:
SMT - multi-threads/hyperthreads
MC - Multi-Core
PKG - Package/Socket level
NUMA - Node level. When NUMA is enabled, PKG gets degenerated since the
pkg mask and the numa mask would be the same.

 

> So, for the relax level original table:
>   1 -> enables 0 SMP -> OK
>   2 -> enables 1 unknown -> does not enable cores in a package
>   3 -> enables 2 MC -> OK for NUMA, but not system wide on non-NUMA system
>   5 -> enables 4 unknown -> does not enable system wide on NUMA
> 
> The updated table
> ====== ===========================================================
>   -1   no request. use system default or follow request of others.
>    0   no search.
>    1   search siblings (hyperthreads in a core).
>    2   search cpu clusters
>    3   search cores in a package.
>    4   search cpus in a node [= system wide on non-NUMA system]
>    5   search nodes in a chunk of node [on NUMA system]
>    6   search system wide [on NUMA system]
> ====== ===========================================================
> would work like this:
>   1 -> enables 0 SMP -> OK
>   2 -> enables 1 unknown -> does nothing new
>   3 -> enables 2 MC -> OK, cores in a package for NUMA and non-NUMA system
>   4 -> enables 3 PKG -> OK on non-NUMA system

It won't; the PKG domain itself won't be there. It gets removed.

>   6 -> enables 5 NUMA -> OK
> 
> I think it would look more correct on "average" systems, but anyway,
> please confirm and I'll remove the table update in an updated patch.
> 

IMHO, the table need not be updated. Just adding a paragraph pointing
to the sysfs files is good enough.


> Thanks
> 
>> IMO the next paragraph that is added is good enough and the above change can be removed.
> 
>>>  ====== ===========================================================
>>>  
>>> +Not all levels can be present and values can change depending on the
>>> +system architecture and kernel configuration. Check
>>> +/sys/kernel/debug/sched/domains/cpu*/domain*/ for system-specific
>>> +details.
>>> +
>>>  The system default is architecture dependent.  The system default
>>>  can be changed using the relax_domain_level= boot parameter.
>>>  

