From: kernel test robot <oliver.sang@intel.com>
To: Tim Chen <tim.c.chen@linux.intel.com>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Tim Chen <tim.c.chen@linux.intel.com>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, <aubrey.li@linux.intel.com>,
<yu.c.chen@intel.com>, Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
"Juri Lelli" <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, "Mel Gorman" <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
"Madadi Vineeth Reddy" <vineethr@linux.ibm.com>,
Hillf Danton <hdanton@sina.com>,
Shrikanth Hegde <sshegde@linux.ibm.com>,
Jianyong Wu <jianyong.wu@outlook.com>,
Yangyu Chen <cyy@cyyself.name>,
Tingyin Duan <tingyin.duan@gmail.com>,
Vern Hao <vernhao@tencent.com>, Len Brown <len.brown@intel.com>,
Aubrey Li <aubrey.li@intel.com>, Zhao Liu <zhao1.liu@intel.com>,
Chen Yu <yu.chen.surf@gmail.com>,
Libo Chen <libo.chen@oracle.com>,
Adam Li <adamli@os.amperecomputing.com>,
Tim Chen <tim.c.chen@intel.com>, <oliver.sang@intel.com>
Subject: Re: [PATCH 16/19] sched/fair: Exclude processes with many threads from cache-aware scheduling
Date: Thu, 23 Oct 2025 15:22:11 +0800
Message-ID: <202510231406.30bc8aec-lkp@intel.com>
In-Reply-To: <637cdb8ab11b1b978d697ed744cc402d32443ecc.1760206683.git.tim.c.chen@linux.intel.com>
Hello,
kernel test robot noticed a 2.1% regression of will-it-scale.per_thread_ops on:
commit: cb57b28051ef1d84e7cb14db4e1ab99b4f33b4b5 ("[PATCH 16/19] sched/fair: Exclude processes with many threads from cache-aware scheduling")
url: https://github.com/intel-lab-lkp/linux/commits/Tim-Chen/sched-fair-Add-infrastructure-for-cache-aware-load-balancing/20251012-022248
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 45b7f780739a3145aeef24d2dfa02517a6c82ed6
patch link: https://lore.kernel.org/all/637cdb8ab11b1b978d697ed744cc402d32443ecc.1760206683.git.tim.c.chen@linux.intel.com/
patch subject: [PATCH 16/19] sched/fair: Exclude processes with many threads from cache-aware scheduling
testcase: will-it-scale
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 48 threads, 2 sockets, Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP), with 64G memory
parameters:
nr_task: 100%
mode: thread
test: tlb_flush2
cpufreq_governor: performance
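The parameters above correspond to an lkp-tests job description. As a rough illustration only, they could be expressed as a job-file fragment like the following; the exact field names and file layout are those of the lkp-tests harness and may differ from what the robot actually ran (the authoritative job file is in the reproduce archive linked below):

```yaml
# Hypothetical lkp job fragment reconstructing the reported parameters.
suite: will-it-scale
testcase: will-it-scale
mode: thread        # one benchmark thread per CPU, sharing one mm
nr_task: 100%       # use all 48 hardware threads
test: tlb_flush2
cpufreq_governor: performance
```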
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202510231406.30bc8aec-lkp@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251023/202510231406.30bc8aec-lkp@intel.com
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-14/performance/x86_64-rhel-9.4/thread/100%/debian-13-x86_64-20250902.cgz/lkp-ivb-2ep2/tlb_flush2/will-it-scale
commit:
4ac141e433 ("sched/fair: Respect LLC preference in task migration and detach")
cb57b28051 ("sched/fair: Exclude processes with many threads from cache-aware scheduling")
4ac141e4330723c0 cb57b28051ef1d84e7cb14db4e1
---------------- ---------------------------
%stddev %change %stddev
\ | \
1482496 -2.1% 1451299 will-it-scale.48.threads
30884 -2.1% 30235 will-it-scale.per_thread_ops
1482496 -2.1% 1451299 will-it-scale.workload
4.447e+08 -2.1% 4.355e+08 proc-vmstat.numa_hit
4.447e+08 -2.1% 4.355e+08 proc-vmstat.numa_local
4.447e+08 -2.1% 4.354e+08 proc-vmstat.pgalloc_normal
8.884e+08 -2.1% 8.698e+08 proc-vmstat.pgfault
4.446e+08 -2.1% 4.353e+08 proc-vmstat.pgfree
6.446e+09 -2.0% 6.318e+09 perf-stat.i.branch-instructions
1.462e+08 -1.4% 1.441e+08 perf-stat.i.branch-misses
1.467e+08 -1.6% 1.444e+08 perf-stat.i.cache-misses
7.692e+08 -1.4% 7.587e+08 perf-stat.i.cache-references
101348 ± 2% +3.6% 104965 perf-stat.i.context-switches
4.14 +1.9% 4.22 perf-stat.i.cpi
883.41 +1.4% 896.20 perf-stat.i.cycles-between-cache-misses
3.083e+10 -2.0% 3.022e+10 perf-stat.i.instructions
0.24 -1.8% 0.24 perf-stat.i.ipc
124.71 -2.0% 122.18 perf-stat.i.metric.K/sec
2944589 -2.1% 2882055 perf-stat.i.minor-faults
2944589 -2.1% 2882055 perf-stat.i.page-faults
4.17 +1.9% 4.25 perf-stat.overall.cpi
876.76 +1.5% 889.96 perf-stat.overall.cycles-between-cache-misses
0.24 -1.8% 0.24 perf-stat.overall.ipc
6.417e+09 -2.0% 6.29e+09 perf-stat.ps.branch-instructions
1.455e+08 -1.4% 1.434e+08 perf-stat.ps.branch-misses
1.46e+08 -1.6% 1.436e+08 perf-stat.ps.cache-misses
7.653e+08 -1.4% 7.549e+08 perf-stat.ps.cache-references
100692 ± 2% +3.6% 104309 perf-stat.ps.context-switches
3.069e+10 -2.0% 3.008e+10 perf-stat.ps.instructions
2931887 -2.1% 2869944 perf-stat.ps.minor-faults
2931887 -2.1% 2869944 perf-stat.ps.page-faults
9.273e+12 -1.9% 9.096e+12 perf-stat.total.instructions
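Two of the derived perf-stat metrics above can be cross-checked against each other: both `cpi * instructions` and `cycles-between-cache-misses * cache-misses` estimate total cycles over the run, so they should roughly agree (interval sampling introduces a few percent of noise). A quick sanity check on the base-commit numbers:

```python
# Cross-check of derived perf-stat metrics from the table above
# (base commit 4ac141e433, perf-stat.i.* columns).
instructions = 3.083e10      # perf-stat.i.instructions
cpi = 4.14                   # perf-stat.i.cpi
cache_misses = 1.467e8       # perf-stat.i.cache-misses
cycles_per_miss = 883.41     # perf-stat.i.cycles-between-cache-misses

# Both products estimate the same quantity: total cycles.
cycles_via_cpi = cpi * instructions
cycles_via_misses = cycles_per_miss * cache_misses

rel_err = abs(cycles_via_cpi - cycles_via_misses) / cycles_via_misses
print(f"{cycles_via_cpi:.3e} vs {cycles_via_misses:.3e}, rel err {rel_err:.1%}")
```

The two estimates land within about 2% of each other, i.e. the derived columns in the report are internally consistent.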
62.03 -1.8 60.18 perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.flush_tlb_mm_range.tlb_finish_mmu.do_madvise.__x64_sys_madvise
63.66 -1.8 61.82 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_finish_mmu.do_madvise.__x64_sys_madvise.do_syscall_64
61.19 -1.8 59.36 perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range.tlb_finish_mmu.do_madvise
65.49 -1.7 63.79 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
75.54 -1.5 74.02 perf-profile.calltrace.cycles-pp.__madvise
71.89 -1.5 70.41 perf-profile.calltrace.cycles-pp.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
72.40 -1.5 70.92 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__madvise
71.83 -1.5 70.35 perf-profile.calltrace.cycles-pp.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
72.35 -1.5 70.87 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
15.31 -0.6 14.70 perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range.tlb_finish_mmu
12.04 -0.5 11.52 perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range
10.97 -0.5 10.47 perf-profile.calltrace.cycles-pp.__flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond
11.08 -0.5 10.58 perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask
4.36 -0.2 4.15 perf-profile.calltrace.cycles-pp.flush_tlb_func.__flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function
4.53 -0.2 4.34 perf-profile.calltrace.cycles-pp.llist_reverse_order.__flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function
2.02 ± 2% -0.1 1.95 perf-profile.calltrace.cycles-pp.lock_vma_under_rcu.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
1.47 -0.1 1.42 perf-profile.calltrace.cycles-pp.folio_add_lru.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.83 ± 2% -0.0 0.80 perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
0.65 ± 2% -0.0 0.62 perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.do_user_addr_fault.exc_page_fault
0.73 ± 2% -0.0 0.70 perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.64 ± 2% -0.0 0.61 ± 2% perf-profile.calltrace.cycles-pp.__flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.do_user_addr_fault
0.73 -0.0 0.71 perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.testcase
0.84 -0.0 0.82 perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_frozen_pages_noprof.alloc_pages_mpol
1.65 +0.0 1.68 perf-profile.calltrace.cycles-pp.__alloc_frozen_pages_noprof.alloc_pages_mpol.vma_alloc_folio_noprof.alloc_anon_folio.do_anonymous_page
1.92 +0.0 1.96 perf-profile.calltrace.cycles-pp.vma_alloc_folio_noprof.alloc_anon_folio.do_anonymous_page.__handle_mm_fault.handle_mm_fault
1.79 +0.0 1.83 perf-profile.calltrace.cycles-pp.alloc_pages_mpol.vma_alloc_folio_noprof.alloc_anon_folio.do_anonymous_page.__handle_mm_fault
0.92 ± 3% +0.1 1.04 perf-profile.calltrace.cycles-pp.tlb_gather_mmu.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.83 ± 6% +0.2 3.04 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
7.06 +0.4 7.48 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
6.25 +0.4 6.70 perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
2.48 +0.5 3.02 perf-profile.calltrace.cycles-pp.alloc_anon_folio.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.7 0.74 ± 5% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.__mem_cgroup_charge.alloc_anon_folio.do_anonymous_page.__handle_mm_fault
0.00 +0.9 0.94 ± 4% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.alloc_anon_folio.do_anonymous_page.__handle_mm_fault.handle_mm_fault
19.08 +1.3 20.36 perf-profile.calltrace.cycles-pp.testcase
14.17 +1.3 15.46 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.testcase
12.74 +1.4 14.10 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
12.49 +1.4 13.85 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
7.97 +1.4 9.38 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
62.25 -1.9 60.39 perf-profile.children.cycles-pp.smp_call_function_many_cond
62.26 -1.9 60.40 perf-profile.children.cycles-pp.on_each_cpu_cond_mask
63.94 -1.8 62.08 perf-profile.children.cycles-pp.flush_tlb_mm_range
65.76 -1.7 64.06 perf-profile.children.cycles-pp.tlb_finish_mmu
75.72 -1.5 74.19 perf-profile.children.cycles-pp.__madvise
73.51 -1.5 72.02 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
73.48 -1.5 71.99 perf-profile.children.cycles-pp.do_syscall_64
71.90 -1.5 70.41 perf-profile.children.cycles-pp.__x64_sys_madvise
71.85 -1.5 70.36 perf-profile.children.cycles-pp.do_madvise
18.46 -0.3 18.12 perf-profile.children.cycles-pp.__flush_smp_call_function_queue
19.79 -0.3 19.47 perf-profile.children.cycles-pp.sysvec_call_function
22.68 -0.3 22.36 perf-profile.children.cycles-pp.asm_sysvec_call_function
17.97 -0.3 17.65 perf-profile.children.cycles-pp.__sysvec_call_function
7.64 -0.2 7.46 perf-profile.children.cycles-pp.flush_tlb_func
7.49 -0.1 7.39 perf-profile.children.cycles-pp.llist_reverse_order
1.47 -0.1 1.42 perf-profile.children.cycles-pp.folio_add_lru
1.99 -0.0 1.94 perf-profile.children.cycles-pp.__pte_offset_map_lock
1.84 -0.0 1.80 perf-profile.children.cycles-pp._raw_spin_lock
1.31 -0.0 1.27 perf-profile.children.cycles-pp.folio_batch_move_lru
0.93 -0.0 0.90 perf-profile.children.cycles-pp.error_entry
0.42 -0.0 0.40 perf-profile.children.cycles-pp.vms_clear_ptes
0.89 -0.0 0.87 perf-profile.children.cycles-pp.clear_page_erms
0.94 -0.0 0.92 perf-profile.children.cycles-pp.prep_new_page
1.66 +0.0 1.69 perf-profile.children.cycles-pp.__alloc_frozen_pages_noprof
1.80 +0.0 1.84 perf-profile.children.cycles-pp.alloc_pages_mpol
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__pi_memset
0.96 ± 3% +0.1 1.08 perf-profile.children.cycles-pp.tlb_gather_mmu
2.90 ± 6% +0.2 3.10 perf-profile.children.cycles-pp.intel_idle
3.23 ± 5% +0.2 3.45 perf-profile.children.cycles-pp.cpuidle_enter
3.32 ± 5% +0.2 3.54 perf-profile.children.cycles-pp.cpuidle_idle_call
7.09 +0.4 7.51 perf-profile.children.cycles-pp.__handle_mm_fault
6.27 +0.4 6.72 perf-profile.children.cycles-pp.do_anonymous_page
0.43 ± 6% +0.5 0.94 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.25 ± 11% +0.5 0.76 ± 5% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
2.49 +0.5 3.03 perf-profile.children.cycles-pp.alloc_anon_folio
19.67 +1.3 20.94 perf-profile.children.cycles-pp.testcase
14.47 +1.3 15.77 perf-profile.children.cycles-pp.asm_exc_page_fault
12.76 +1.4 14.12 perf-profile.children.cycles-pp.exc_page_fault
12.60 +1.4 13.96 perf-profile.children.cycles-pp.do_user_addr_fault
7.99 +1.5 9.44 perf-profile.children.cycles-pp.handle_mm_fault
42.94 -1.2 41.71 perf-profile.self.cycles-pp.smp_call_function_many_cond
6.02 -0.2 5.87 perf-profile.self.cycles-pp.flush_tlb_func
7.46 -0.1 7.36 perf-profile.self.cycles-pp.llist_reverse_order
1.44 -0.0 1.40 perf-profile.self.cycles-pp.lock_vma_under_rcu
0.88 -0.0 0.85 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.91 -0.0 0.88 perf-profile.self.cycles-pp.error_entry
0.07 +0.0 0.08 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.76 ± 2% +0.1 0.86 perf-profile.self.cycles-pp.tlb_gather_mmu
1.10 ± 5% +0.1 1.24 ± 2% perf-profile.self.cycles-pp.tlb_finish_mmu
2.90 ± 6% +0.2 3.10 perf-profile.self.cycles-pp.intel_idle
0.20 ± 10% +0.4 0.62 ± 6% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.15 ± 4% +0.8 1.00 ± 3% perf-profile.self.cycles-pp.handle_mm_fault
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki