public inbox for linux-kernel@vger.kernel.org
* [PATCH -next] sched/fair: inline cpu_util_without and cpu_util to improve performance
@ 2024-07-23  7:36 Li Zetao
  2024-07-24 10:53 ` Peter Zijlstra
  0 siblings, 1 reply; 3+ messages in thread
From: Li Zetao @ 2024-07-23  7:36 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, vschneid
  Cc: lizetao1, zhangqiao22, linux-kernel

Commit 3eb6d6ececca ("sched/fair: Refactor CPU utilization functions")
refactored the cpu_util_without() and cpu_util() functions. Because
cpu_util() grew in size, the compiler no longer inlines it. This hurt
performance: when updating a sched_group's statistics,
cpu_util_without() and cpu_util() sit on a hot path.
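
For context, __always_inline forces the compiler to inline a function
even when its size heuristics would otherwise decline to. A minimal
userspace sketch of that mechanism (util_sketch() is a hypothetical
stand-in, not kernel code):

	#include <stdio.h>

	/* Roughly what the kernel's __always_inline expands to
	 * (see include/linux/compiler_types.h), simplified here: */
	#define __always_inline inline __attribute__((__always_inline__))

	/*
	 * A plain "static" helper can be left out of line once it
	 * grows past the compiler's inlining cost threshold; the
	 * always_inline attribute overrides that heuristic.
	 */
	static __always_inline unsigned long util_sketch(unsigned long util)
	{
		return util + (util >> 2);	/* placeholder arithmetic */
	}

	int main(void)
	{
		printf("%lu\n", util_sketch(100));	/* prints 125 */
		return 0;
	}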

Inlining the cpu_util_without() and cpu_util() functions significantly
improves performance in lmbench, as the following results show:

  Machine: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
                     before          after          diff
  fork+exit          317.0625        303.6667       -4.22%
  fork+execve        1482.5000       1407.0000      -5.09%
  fork+/bin/sh       2096.0000       2020.3333      -3.61%
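
These appear to be lmbench's process-latency (lat_proc) results,
reported in microseconds (lower is better); assuming a standard
lmbench setup, they should be roughly reproducible with:

	lat_proc fork
	lat_proc exec
	lat_proc shell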

Mark cpu_util_without() and cpu_util() as __always_inline. While this
slightly increases the size of kernel/sched/fair.o, the performance
gains on these hot paths make it an acceptable trade-off.

Size comparison before and after the patch:
     text	   data	    bss	    dec	    hex	filename
    75338	   5382	    176	  80896	  13c00	kernel/sched/fair.o.before
    75378	   5374	    176	  80928	  13c20	kernel/sched/fair.o.after
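
The table follows the output format of binutils' size(1); given the
before/after object files, the numbers can be regenerated with e.g.:

	size kernel/sched/fair.o.before kernel/sched/fair.o.after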

Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Li Zetao <lizetao1@huawei.com>
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5904405ffc59..677b78fa65b6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7706,7 +7706,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
  *
  * Return: (Boosted) (estimated) utilization for the specified CPU.
  */
-static unsigned long
+static __always_inline unsigned long
 cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
 {
 	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
@@ -7794,7 +7794,7 @@ unsigned long cpu_util_cfs_boost(int cpu)
  * utilization of the specified task, whenever the task is currently
  * contributing to the CPU utilization.
  */
-static unsigned long cpu_util_without(int cpu, struct task_struct *p)
+static __always_inline unsigned long cpu_util_without(int cpu, struct task_struct *p)
 {
 	/* Task has no contribution or is new */
 	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
-- 
2.34.1



Thread overview: 3+ messages
2024-07-23  7:36 [PATCH -next] sched/fair: inline cpu_util_without and cpu_util to improve performance Li Zetao
2024-07-24 10:53 ` Peter Zijlstra
2024-07-25 14:16   ` Li Zetao
